tf.compat.v1.estimator.BaselineEstimator
An estimator that can establish a simple baseline. Inherits From: Estimator

tf.compat.v1.estimator.BaselineEstimator(
    head, model_dir=None, optimizer='Ftrl', config=None
)

The estimator uses a user-specified head. This estimator ignores feature values and will learn to predict the average value of each label. E.g. for single-label classification problems, this will predict the probability distribution of the classes as seen in the labels. For multi-label classification problems, it will predict the ratio of examples that contain each class.

Example:

# Build baseline multi-label classifier.
estimator = tf.estimator.BaselineEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3))

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass

def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass

# Fit model.
estimator.train(input_fn=input_fn_train)

# Evaluate cross entropy between the test and train labels.
loss = estimator.evaluate(input_fn=input_fn_eval)["loss"]

# For each class, predicts the ratio of training examples that contain the
# class.
predictions = estimator.predict(input_fn=input_fn_eval)

Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is specified in the constructor of the head passed to BaselineEstimator (and is not None), a feature with key=weight_column whose value is a Tensor.
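For something copy-pasteable, here is a minimal, self-contained sketch with synthetic multi-hot labels. It assumes a TensorFlow release where tf.estimator.MultiLabelHead is available (older 1.x releases exposed an equivalent head under tf.contrib.estimator); the feature name and sizes are illustrative.

import numpy as np
import tensorflow.compat.v1 as tf

def input_fn_train():
  # The baseline ignores features entirely; only the labels matter.
  features = {'f': np.zeros((8, 1), dtype=np.float32)}
  labels = np.array([[1, 0, 1]] * 8, dtype=np.int64)  # multi-hot, 3 classes
  return tf.data.Dataset.from_tensor_slices((features, labels)).repeat().batch(4)

estimator = tf.compat.v1.estimator.BaselineEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3))
estimator.train(input_fn=input_fn_train, steps=20)
# Each predicted class probability converges toward the fraction of training
# examples that contain that class (here 1.0, 0.0 and 1.0).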
Args
model_fn: Model function. Follows the signature:
features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same.
labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None.
mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys.
params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in the params parameter. This allows configuring Estimators for hyperparameter tuning.
config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas or model_dir.
Returns -- tf.estimator.EstimatorSpec
model_dir: Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used.
config: estimator.RunConfig configuration object.
params: dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic python types.
warm_start_from: Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged.

Raises
ValueError: if parameters of model_fn don't match params.
ValueError: if this is called via a subclass and if that class overrides a member of Estimator.

Attributes
config
model_dir
model_fn: Returns the model_fn which is bound to self.params.
params

Methods

eval_dir
eval_dir( name=None )
Shows the name of the directory where evaluation metrics are dumped.
Args
name: Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.
Returns
A string: the path of the directory that contains the evaluation metrics.

evaluate
evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None )
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn: A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: a tf.data.Dataset object, whose outputs must be a tuple (features, labels) with the same constraints as below; or a tuple (features, labels), where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps: Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception.
hooks: List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path: Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name: Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.
Returns
A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step that contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError: If steps <= 0.
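A short usage sketch for evaluate, continuing the hypothetical estimator and input functions from above; the evaluation names 'train' and 'test' are illustrative:

# Evaluate the same model on two data sets under distinct names, so the
# metrics land in separate folders and appear as separate TensorBoard curves.
train_metrics = estimator.evaluate(input_fn=input_fn_train, steps=100, name='train')
test_metrics = estimator.evaluate(input_fn=input_fn_eval, name='test')
print(test_metrics['loss'], test_metrics['global_step'])
print(estimator.eval_dir(name='test'))  # directory holding the 'test' metrics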
experimental_export_all_saved_models
experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None )
Exports a SavedModel with tf.MetaGraphDefs for each requested mode.
For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures.
For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question.
Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base: A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map: dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra: A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text: whether to write the SavedModel proto in text format.
checkpoint_path: The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns
The path to the exported directory as a bytes object.
Raises
ValueError: if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.
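A hedged sketch of exporting just the prediction graph through this API; the export path and feature spec are illustrative and assumed to match the model's inputs:

# Build a receiver that parses serialized tf.Example protos at serving time.
feature_spec = {'f': tf.io.FixedLenFeature([1], tf.float32)}
serving_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)

export_dir = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/all_modes_export',
    input_receiver_fn_map={tf.estimator.ModeKeys.PREDICT: serving_fn})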
export_saved_model
export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT )
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators.
This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.
The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.
Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base: A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn: A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra: A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text: whether to write the SavedModel proto in text format.
checkpoint_path: The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode: tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns
The path to the exported directory as a bytes object.
Raises
ValueError: if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.
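A minimal export sketch, reusing the hypothetical feature_spec from the previous example; the path is illustrative:

serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_path = estimator.export_saved_model(
    '/tmp/saved_model_export', serving_input_receiver_fn)
# export_path is a bytes object naming the timestamped subdirectory, e.g.
# b'/tmp/saved_model_export/1580000000' (the timestamp varies per export).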
export_savedmodel
export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False )
Exports inference graph as a SavedModel into the given dir. (deprecated)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead.
For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.
The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.
Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base: A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn: A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra: A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text: whether to write the SavedModel proto in text format.
checkpoint_path: The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
strip_default_attrs: Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.
Returns
The path to the exported directory as a bytes object.
Raises
ValueError: if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.

get_variable_names
get_variable_names()
Returns the list of all variable names in this model.
Returns
List of names.
Raises
ValueError: If the Estimator has not produced a checkpoint yet.

get_variable_value
get_variable_value( name )
Returns the value of the variable given by name.
Args
name: string or a list of strings, name of the tensor.
Returns
Numpy array: the value of the tensor.
Raises
ValueError: If the Estimator has not produced a checkpoint yet.

latest_checkpoint
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns
The full path to the latest checkpoint, or None if no checkpoint was found.
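The three introspection methods above combine naturally; a hedged sketch, assuming the estimator has already produced a checkpoint:

print(estimator.latest_checkpoint())  # full path, or None before any training
for name in estimator.get_variable_names():
  # For a baseline model this is essentially the bias, plus bookkeeping
  # variables such as the global step and optimizer accumulators.
  print(name, estimator.get_variable_value(name).shape)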
predict
predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True )
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn: A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: a tf.data.Dataset object, whose outputs must have the same constraints as below; features, a tf.Tensor or a dictionary of string feature name to Tensor (features are consumed by model_fn and should satisfy the expectation of model_fn from inputs); or a tuple, in which case the first item is extracted as features.
predict_keys: list of str, names of the keys to predict. It is used if tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used, then the rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks: List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path: Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples: If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields
Evaluated values of predictions tensors.
Raises
ValueError: If the batch lengths of the prediction tensors are not all the same and yield_single_examples is True.
ValueError: If there is a conflict between predict_keys and predictions. For example, if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict.

train
train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None )
Trains a model given training data input_fn.
Args
input_fn: A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: a tf.data.Dataset object, whose outputs must be a tuple (features, labels) with the same constraints as below; or a tuple (features, labels), where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks: List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps: Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps: Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iteration since the first call did all 100 steps.
saving_listeners: list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns
self, for chaining.
Raises
ValueError: If both steps and max_steps are not None.
ValueError: If either steps or max_steps <= 0.
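Since predict returns a generator, a bounded consumption pattern is usually what you want; a sketch using itertools.islice to cap the number of examples drawn (the prediction key depends on the head; 'probabilities' is typical for classification heads):

import itertools

def input_fn_predict():
  # Features only; labels are not needed at prediction time.
  return tf.data.Dataset.from_tensor_slices(
      {'f': np.zeros((4, 1), dtype=np.float32)}).batch(2)

for pred in itertools.islice(estimator.predict(input_fn=input_fn_predict), 4):
  print(pred['probabilities'])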
tf.compat.v1.estimator.BaselineRegressor
A regressor that can establish a simple baseline. Inherits From: Estimator

tf.compat.v1.estimator.BaselineRegressor(
    model_dir=None, label_dimension=1, weight_column=None, optimizer='Ftrl',
    config=None, loss_reduction=tf.compat.v1.losses.Reduction.SUM
)

This regressor ignores feature values and will learn to predict the average value of each label.

Example:

# Build BaselineRegressor
regressor = tf.estimator.BaselineRegressor()

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents the label's
  # value.
  pass

def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents the label's
  # value.
  pass

# Fit model.
regressor.train(input_fn=input_fn_train)

# Evaluate squared loss between the test and train targets.
loss = regressor.evaluate(input_fn=input_fn_eval)["loss"]

# predict outputs the mean value seen during training.
predictions = regressor.predict(input_fn=input_fn_eval)

Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
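As with the classifier above, a self-contained sketch may help; feature names, values, and sizes are illustrative:

import numpy as np
import tensorflow.compat.v1 as tf

def input_fn_train():
  features = {'f': np.random.rand(16, 1).astype(np.float32)}  # ignored by the model
  labels = np.full((16, 1), 1.5, dtype=np.float32)
  return tf.data.Dataset.from_tensor_slices((features, labels)).repeat().batch(4)

regressor = tf.compat.v1.estimator.BaselineRegressor()
regressor.train(input_fn=input_fn_train, steps=50)

def input_fn_predict():
  return tf.data.Dataset.from_tensor_slices(
      {'f': np.zeros((2, 1), dtype=np.float32)}).batch(2)

for pred in regressor.predict(input_fn=input_fn_predict):
  print(pred['predictions'])  # approaches the mean training label, 1.5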
Args
model_fn: Model function. Follows the signature:
features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same.
labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None.
mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys.
params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in the params parameter. This allows configuring Estimators for hyperparameter tuning.
config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas or model_dir.
Returns -- tf.estimator.EstimatorSpec
model_dir: Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used.
config: estimator.RunConfig configuration object.
params: dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic python types.
warm_start_from: Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged.

Raises
ValueError: if parameters of model_fn don't match params.
ValueError: if this is called via a subclass and if that class overrides a member of Estimator.

Eager Compatibility
Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.

Attributes
config
model_dir
model_fn: Returns the model_fn which is bound to self.params.
params

Methods

eval_dir
eval_dir( name=None )
Shows the name of the directory where evaluation metrics are dumped.
Args
name: Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.
Returns
A string: the path of the directory that contains the evaluation metrics.

evaluate
evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None )
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn: A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: a tf.data.Dataset object, whose outputs must be a tuple (features, labels) with the same constraints as below; or a tuple (features, labels), where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps: Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception.
hooks: List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path: Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name: Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.
Returns
A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step that contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError: If steps <= 0.

experimental_export_all_saved_models
experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None )
Exports a SavedModel with tf.MetaGraphDefs for each requested mode.
For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph.
Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures.
For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question.
Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base: A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map: dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra: A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text: whether to write the SavedModel proto in text format.
checkpoint_path: The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns
The path to the exported directory as a bytes object.
Raises
ValueError: if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.

export_saved_model
export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT )
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators.
This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.
The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys.
One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.
Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base: A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn: A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra: A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text: whether to write the SavedModel proto in text format.
checkpoint_path: The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode: tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns
The path to the exported directory as a bytes object.
Raises
ValueError: if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.

export_savedmodel
export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False )
Exports inference graph as a SavedModel into the given dir. (deprecated)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead.
For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.
The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.
Extra assets may be written into the SavedModel via the assets_extra argument.
This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base: A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn: A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra: A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text: whether to write the SavedModel proto in text format.
checkpoint_path: The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
strip_default_attrs: Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.
Returns
The path to the exported directory as a bytes object.
Raises
ValueError: if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.

get_variable_names
get_variable_names()
Returns the list of all variable names in this model.
Returns
List of names.
Raises
ValueError: If the Estimator has not produced a checkpoint yet.

get_variable_value
get_variable_value( name )
Returns the value of the variable given by name.
Args
name: string or a list of strings, name of the tensor.
Returns
Numpy array: the value of the tensor.
Raises
ValueError: If the Estimator has not produced a checkpoint yet.

latest_checkpoint
latest_checkpoint()
Finds the filename of the latest saved checkpoint file in model_dir.
Returns
The full path to the latest checkpoint, or None if no checkpoint was found.

predict
predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True )
Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506
Args
input_fn: A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: a tf.data.Dataset object, whose outputs must have the same constraints as below; features, a tf.Tensor or a dictionary of string feature name to Tensor (features are consumed by model_fn and should satisfy the expectation of model_fn from inputs); or a tuple, in which case the first item is extracted as features.
predict_keys: list of str, names of the keys to predict. It is used if tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used, then the rest of the predictions will be filtered from the dictionary. If None, returns all.
hooks: List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.
checkpoint_path: Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.
yield_single_examples: If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.
Yields
Evaluated values of predictions tensors.
Raises
ValueError: If the batch lengths of the prediction tensors are not all the same and yield_single_examples is True.
ValueError: If there is a conflict between predict_keys and predictions. For example, if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict.

train
train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None )
Trains a model given training data input_fn.
Args
input_fn: A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: a tf.data.Dataset object, whose outputs must be a tuple (features, labels) with the same constraints as below; or a tuple (features, labels), where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
hooks: List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.
steps: Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps: Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iteration since the first call did all 100 steps.
saving_listeners: list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.
Returns
self, for chaining.
Raises
ValueError: If both steps and max_steps are not None.
ValueError: If either steps or max_steps <= 0.
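The incremental semantics of steps versus the absolute semantics of max_steps are easy to demonstrate; a sketch reusing the regressor and input function from the earlier example:

# steps is incremental: these two calls add 10 + 10 = 20 steps on top of
# whatever the checkpoint already holds.
regressor.train(input_fn=input_fn_train, steps=10)
regressor.train(input_fn=input_fn_train, steps=10)

# max_steps is absolute: the second call is a no-op once the global step
# has already reached 100.
regressor.train(input_fn=input_fn_train, max_steps=100)
regressor.train(input_fn=input_fn_train, max_steps=100)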
tf.compat.v1.estimator.classifier_parse_example_spec
Generates a parsing spec for tf.parse_example to be used with classifiers.

tf.compat.v1.estimator.classifier_parse_example_spec(
    feature_columns, label_key, label_dtype=tf.dtypes.int64, label_default=None,
    weight_column=None
)

If users keep data in tf.Example format, they need to call tf.parse_example with a proper feature spec. This utility helps with two main things: First, users need to combine the parsing spec of features with labels and weights (if any) since they are all parsed from the same tf.Example instance; this utility combines these specs. Second, it is difficult to map the label expected by a classifier such as DNNClassifier to the corresponding tf.parse_example spec; this utility encodes it from the related information provided by the user (key, dtype).

Example output of parsing spec:

# Define features and transformations
feature_b = tf.feature_column.numeric_column(...)
feature_c_bucketized = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column("feature_c"), ...)
feature_a_x_feature_c = tf.feature_column.crossed_column(
    columns=["feature_a", feature_c_bucketized], ...)

feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c]
parsing_spec = tf.estimator.classifier_parse_example_spec(
    feature_columns, label_key='my-label', label_dtype=tf.string)

# For the above example, classifier_parse_example_spec would return the dict:
assert parsing_spec == {
    "feature_a": parsing_ops.VarLenFeature(tf.string),
    "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32),
    "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32),
    "my-label": parsing_ops.FixedLenFeature([1], dtype=tf.string)
}

Example usage with a classifier:

feature_columns = # define features via tf.feature_column
estimator = DNNClassifier(
    n_classes=1000,
    feature_columns=feature_columns,
    weight_column='example-weight',
    label_vocabulary=['photos', 'keep', ...],
    hidden_units=[256, 64, 16])
# This label configuration tells the classifier the following:
# * weights are retrieved with key 'example-weight'
# * label is a string and can be one of the following ['photos', 'keep', ...]
# * integer id for label 'photos' is 0, 'keep' is 1, ...

# Input builders
def input_fn_train():
  # Returns a tuple of features and labels.
  features = tf.contrib.learn.read_keyed_batch_features(
      file_pattern=train_files,
      batch_size=batch_size,
      # creates parsing configuration for tf.parse_example
      features=tf.estimator.classifier_parse_example_spec(
          feature_columns,
          label_key='my-label',
          label_dtype=tf.string,
          weight_column='example-weight'),
      reader=tf.RecordIOReader)
  labels = features.pop('my-label')
  return features, labels

estimator.train(input_fn=input_fn_train)

Args
feature_columns: An iterable containing all feature columns. All items should be instances of classes derived from FeatureColumn.
label_key: A string identifying the label. It means tf.Example stores labels with this key.
label_dtype: A tf.dtype identifying the type of labels. By default it is tf.int64. If the user defines a label_vocabulary, this should be set as tf.string. tf.float32 labels are only supported for binary classification.
label_default: used as the label if label_key does not exist in the given tf.Example. An example usage: let's say label_key is 'clicked' and tf.Example contains clicked data only for positive examples, in the format key:clicked, value:1. This means that if there is no data with key 'clicked' it should count as a negative example, which is achieved by setting label_default=0.
The type of this value should be compatible with label_dtype.
weight_column: A string or a NumericColumn created by tf.feature_column.numeric_column defining the feature column representing weights. It is used to down-weight or boost examples during training. It will be multiplied by the loss of the example. If it is a string, it is used as a key to fetch the weight tensor from the features. If it is a NumericColumn, the raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied on it to get the weight tensor.

Returns
A dict mapping each feature key to a FixedLenFeature or VarLenFeature value.

Raises
ValueError: If label is used in feature_columns.
ValueError: If weight_column is used in feature_columns.
ValueError: If any of the given feature_columns is not a _FeatureColumn instance.
ValueError: If weight_column is not a NumericColumn instance.
ValueError: if label_key is None.
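As a more current sketch than the tf.contrib example above, the returned spec plugs directly into tf.io.parse_example inside a tf.data pipeline; file names, feature columns, and keys here are illustrative:

import tensorflow.compat.v1 as tf

feature_columns = [tf.feature_column.numeric_column('feature_b')]
parsing_spec = tf.compat.v1.estimator.classifier_parse_example_spec(
    feature_columns, label_key='my-label', label_dtype=tf.string,
    weight_column='example-weight')

def input_fn_train():
  dataset = tf.data.TFRecordDataset(['train.tfrecord'])  # hypothetical file
  dataset = dataset.batch(32)

  def parse(serialized):
    parsed = tf.io.parse_example(serialized, features=parsing_spec)
    labels = parsed.pop('my-label')  # the weight feature stays in the dict
    return parsed, labels

  return dataset.map(parse)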
tf.compat.v1.estimator.DNNClassifier
A classifier for TensorFlow DNN models. Inherits From: Estimator

tf.compat.v1.estimator.DNNClassifier(
    hidden_units, feature_columns, model_dir=None, n_classes=2, weight_column=None,
    label_vocabulary=None, optimizer='Adagrad', activation_fn=tf.nn.relu,
    dropout=None, input_layer_partitioner=None, config=None, warm_start_from=None,
    loss_reduction=tf.compat.v1.losses.Reduction.SUM, batch_norm=False
)

Example:

categorical_feature_a = categorical_column_with_hash_bucket(...)
categorical_feature_b = categorical_column_with_hash_bucket(...)

categorical_feature_a_emb = embedding_column(
    categorical_column=categorical_feature_a, ...)
categorical_feature_b_emb = embedding_column(
    categorical_column=categorical_feature_b, ...)

estimator = tf.estimator.DNNClassifier(
    feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
    hidden_units=[1024, 512, 256])

# Or estimator using the ProximalAdagradOptimizer optimizer with
# regularization.
estimator = tf.estimator.DNNClassifier(
    feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
    hidden_units=[1024, 512, 256],
    optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(
        learning_rate=0.1,
        l1_regularization_strength=0.001
    ))

# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.DNNClassifier(
    feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
    hidden_units=[1024, 512, 256],
    optimizer=lambda: tf.keras.optimizers.Adam(
        learning_rate=tf.compat.v1.train.exponential_decay(
            learning_rate=0.1,
            global_step=tf.compat.v1.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))

# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.DNNClassifier(
    feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
    hidden_units=[1024, 512, 256],
    warm_start_from="/path/to/checkpoint/dir")

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass

def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass

def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass

estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)

Input of train and evaluate should have the following features, otherwise there will be a KeyError:
if weight_column is not None, a feature with key=weight_column whose value is a Tensor.
for each column in feature_columns:
if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor.
if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor.
if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.

Loss is calculated by using softmax cross entropy.
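To make the snippets above concrete, here is one hedged end-to-end sketch with made-up column names and synthetic data; the vocabulary, embedding dimension, and layer sizes are illustrative:

import numpy as np
import tensorflow.compat.v1 as tf

cat_a = tf.feature_column.categorical_column_with_hash_bucket('a', hash_bucket_size=10)
cat_a_emb = tf.feature_column.embedding_column(cat_a, dimension=4)

estimator = tf.compat.v1.estimator.DNNClassifier(
    feature_columns=[cat_a_emb],
    hidden_units=[16, 8],
    n_classes=3)

def input_fn_train():
  features = {'a': np.array(['x', 'y', 'z', 'x'])}
  labels = np.array([[0], [1], [2], [0]], dtype=np.int64)  # class indices
  return tf.data.Dataset.from_tensor_slices((features, labels)).repeat().batch(2)

estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_train, steps=10)
print(metrics['accuracy'], metrics['loss'])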
Args
model_fn: Model function. Follows the signature:
features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same.
labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None.
mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys.
params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in the params parameter. This allows configuring Estimators for hyperparameter tuning.
config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas or model_dir.
Returns -- tf.estimator.EstimatorSpec
model_dir: Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used.
config: estimator.RunConfig configuration object.
params: dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic python types.
warm_start_from: Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged.

Raises
ValueError: if parameters of model_fn don't match params.
ValueError: if this is called via a subclass and if that class overrides a member of Estimator.

Eager Compatibility
Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.

Attributes
config
model_dir
model_fn: Returns the model_fn which is bound to self.params.
params

Methods

eval_dir
eval_dir( name=None )
Shows the name of the directory where evaluation metrics are dumped.
Args
name: Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.
Returns
A string: the path of the directory that contains the evaluation metrics.

evaluate
evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None )
Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).
Args
input_fn: A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: a tf.data.Dataset object, whose outputs must be a tuple (features, labels) with the same constraints as below; or a tuple (features, labels), where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps: Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception.
hooks: List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path: Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.
name: Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.
Returns
A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step that contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.
Raises
ValueError: If steps <= 0.

experimental_export_all_saved_models
experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None )
Exports a SavedModel with tf.MetaGraphDefs for each requested mode.
For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures.
For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question.
Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
Args
export_dir_base: A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map: dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.
assets_extra: A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text: whether to write the SavedModel proto in text format.
checkpoint_path: The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
Returns
The path to the exported directory as a bytes object.
Raises
ValueError: if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.

export_saved_model
export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT )
Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators.
This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.
The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.
Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.
Args
export_dir_base: A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
serving_input_receiver_fn: A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.
assets_extra: A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.
as_text: whether to write the SavedModel proto in text format.
checkpoint_path: The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.
experimental_mode: tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns
The path to the exported directory as a bytes object.
Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of string, name of the tensor. Returns Numpy array - value of the tensor. Raises ValueError If the Estimator has not produced a checkpoint yet.
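As a minimal sketch of the two variable-inspection helpers above (assuming estimator has already been trained, so a checkpoint exists):

# List every variable in the model and print its shape.
for name in estimator.get_variable_names():
    print(name, estimator.get_variable_value(name).shape)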
latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for the given features. Please note that interleaving two predict outputs does not work. See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, names of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then the rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of the predictions tensors. Raises ValueError If the batch lengths of the prediction tensors are not all the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRangeError or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for a total of 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead (a short sketch of this behavior follows this method's documentation). If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRangeError or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iteration since the first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint saving. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
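A minimal sketch of the incremental steps semantics described above, assuming estimator and input_fn_train are defined as in the class example:

# With `steps`, training is incremental across calls.
estimator.train(input_fn=input_fn_train, steps=100)  # global step: 0 -> 100
estimator.train(input_fn=input_fn_train, steps=100)  # global step: 100 -> 200

# With `max_steps`, the second call is a no-op: the global step already
# equals max_steps after the first call.
estimator.train(input_fn=input_fn_train, max_steps=100)
estimator.train(input_fn=input_fn_train, max_steps=100)  # returns immediately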
tf.compat.v1.estimator.DNNEstimator An estimator for TensorFlow DNN models with user-specified head. Inherits From: Estimator tf.compat.v1.estimator.DNNEstimator( head, hidden_units, feature_columns, model_dir=None, optimizer='Adagrad', activation_fn=tf.nn.relu, dropout=None, input_layer_partitioner=None, config=None, warm_start_from=None, batch_norm=False ) Example:

sparse_feature_a = sparse_column_with_hash_bucket(...)
sparse_feature_b = sparse_column_with_hash_bucket(...)
sparse_feature_a_emb = embedding_column(sparse_id_column=sparse_feature_a, ...)
sparse_feature_b_emb = embedding_column(sparse_id_column=sparse_feature_b, ...)

estimator = tf.estimator.DNNEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
    hidden_units=[1024, 512, 256])

# Or estimator using the ProximalAdagradOptimizer optimizer with
# regularization.
estimator = tf.estimator.DNNEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
    hidden_units=[1024, 512, 256],
    optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(
        learning_rate=0.1,
        l1_regularization_strength=0.001
    ))

# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.DNNEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
    hidden_units=[1024, 512, 256],
    optimizer=lambda: tf.keras.optimizers.Adam(
        learning_rate=tf.compat.v1.train.exponential_decay(
            learning_rate=0.1,
            global_step=tf.compat.v1.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))

# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.DNNEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[sparse_feature_a_emb, sparse_feature_b_emb],
    hidden_units=[1024, 512, 256],
    warm_start_from="/path/to/checkpoint/dir")

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)

Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor. for each column in feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor. Loss and predicted output are determined by the specified head. Args model_fn Model function. Follows the signature: features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same. labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed.
If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None. mode -- Optional. Specifies whether this is training, evaluation, or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in params parameter. This allows configuring Estimators from hyperparameter tuning. config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas, or model_dir. Returns -- tf.estimator.EstimatorSpec model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used. config estimator.RunConfig configuration object. params dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic Python types. warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged. Raises ValueError parameters of model_fn don't match params. ValueError if this is called via a subclass and if that class overrides a member of Estimator. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes. Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params Methods eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped. Args name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard. Returns A string which is the path of the directory containing the evaluation metrics. evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard. Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean. Raises ValueError If steps <= 0. experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
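As a hedged sketch of the input_receiver_fn_map and assets_extra arguments described above (the feature name 'x', its shape, and all paths are hypothetical, the model is assumed to expect a single dense feature, and the receiver builders are assumed to be available under tf.estimator.export):

# Hypothetical placeholders, used only to convey dtypes and shapes to the
# receiver builders; a real export should mirror the model's features.
x = tf.compat.v1.placeholder(dtype=tf.float32, shape=[None, 3], name='x')
y = tf.compat.v1.placeholder(dtype=tf.int64, shape=[None], name='y')
input_receiver_fn_map = {
    tf.estimator.ModeKeys.TRAIN:
        tf.estimator.export.build_raw_supervised_input_receiver_fn(
            {'x': x}, y),
    tf.estimator.ModeKeys.PREDICT:
        tf.estimator.export.build_raw_serving_input_receiver_fn({'x': x}),
}
export_dir = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/exports',  # hypothetical path
    input_receiver_fn_map=input_receiver_fn_map,
    assets_extra={'my_asset_file.txt': '/path/to/my_asset_file.txt'})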
Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. Returns The path to the exported directory as a bytes object. Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of string, name of the tensor. Returns Numpy array - value of the tensor. 
Raises ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for the given features. Please note that interleaving two predict outputs does not work. See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, names of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then the rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of the predictions tensors. Raises ValueError If the batch lengths of the prediction tensors are not all the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRangeError or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for a total of 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead.
If set, max_steps must be None. max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRangeError or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iteration since the first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint saving. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
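As a minimal sketch of consuming predictions from the predict method documented above (assuming estimator and input_fn_predict as in the class example; the 'probabilities' key is hypothetical and depends on what the configured head actually emits):

# predict() returns a generator; iterate to run inference example by example.
for pred in estimator.predict(input_fn=input_fn_predict,
                              predict_keys=['probabilities']):
    print(pred['probabilities'])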
tf.compat.v1.estimator.DNNLinearCombinedClassifier An estimator for TensorFlow Linear and DNN joined classification models. Inherits From: Estimator tf.compat.v1.estimator.DNNLinearCombinedClassifier( model_dir=None, linear_feature_columns=None, linear_optimizer='Ftrl', dnn_feature_columns=None, dnn_optimizer='Adagrad', dnn_hidden_units=None, dnn_activation_fn=tf.nn.relu, dnn_dropout=None, n_classes=2, weight_column=None, label_vocabulary=None, input_layer_partitioner=None, config=None, warm_start_from=None, loss_reduction=tf.compat.v1.losses.Reduction.SUM, batch_norm=False, linear_sparse_combiner='sum' ) Note: This estimator is also known as wide-n-deep. Example:

numeric_feature = numeric_column(...)
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
categorical_feature_a_emb = embedding_column(
    categorical_column=categorical_column_a, ...)
categorical_feature_b_emb = embedding_column(
    categorical_column=categorical_column_b, ...)

estimator = tf.estimator.DNNLinearCombinedClassifier(
    # wide settings
    linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
    linear_optimizer=tf.keras.optimizers.Ftrl(...),
    # deep settings
    dnn_feature_columns=[
        categorical_feature_a_emb, categorical_feature_b_emb,
        numeric_feature],
    dnn_hidden_units=[1000, 500, 100],
    dnn_optimizer=tf.keras.optimizers.Adagrad(...),
    # warm-start settings
    warm_start_from="/path/to/checkpoint/dir")

# To apply L1 and L2 regularization, you can set dnn_optimizer to:
tf.compat.v1.train.ProximalAdagradOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,
    l2_regularization_strength=0.001)
# To apply learning rate decay, you can set dnn_optimizer to a callable:
lambda: tf.keras.optimizers.Adam(
    learning_rate=tf.compat.v1.train.exponential_decay(
        learning_rate=0.1,
        global_step=tf.compat.v1.train.get_global_step(),
        decay_steps=10000,
        decay_rate=0.96))
# It is the same for linear_optimizer.

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)

Input of train and evaluate should have the following features, otherwise there will be a KeyError: for each column in dnn_feature_columns + linear_feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor. Loss is calculated by using softmax cross entropy. Args model_fn Model function. Follows the signature: features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same. labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed.
If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None. mode -- Optional. Specifies whether this is training, evaluation, or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in params parameter. This allows configuring Estimators from hyperparameter tuning. config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas, or model_dir. Returns -- tf.estimator.EstimatorSpec model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used. config estimator.RunConfig configuration object. params dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic Python types. warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged. Raises ValueError parameters of model_fn don't match params. ValueError if this is called via a subclass and if that class overrides a member of Estimator. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes. Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params Methods eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped. Args name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard. Returns A string which is the path of the directory containing the evaluation metrics. evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard. Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean. Raises ValueError If steps <= 0. experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
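A hedged sketch of building the PREDICT entry of input_receiver_fn_map from the feature columns in the class example above, by parsing serialized tf.Example protos (the export path is hypothetical):

# Serving requests send serialized tf.Example protos, which are parsed
# into the features declared by the columns.
feature_spec = tf.feature_column.make_parse_example_spec(
    [numeric_feature, categorical_column_a, categorical_column_b])
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_dir = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/exports',  # hypothetical path
    input_receiver_fn_map={
        tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn})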
Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. Returns The path to the exported directory as a bytes object. Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of string, name of the tensor. Returns Numpy array - value of the tensor. 
Raises ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for the given features. Please note that interleaving two predict outputs does not work. See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, names of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then the rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of the predictions tensors. Raises ValueError If the batch lengths of the prediction tensors are not all the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRangeError or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for a total of 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead.
If set, max_steps must be None. max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRangeError or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iteration since the first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint saving. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
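As a minimal sketch of reading the metrics dict returned by evaluate (assuming estimator and input_fn_eval as in the class example; for this canned classifier the dict includes loss, average_loss, accuracy, and global_step):

metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
print(metrics['global_step'], metrics['loss'],
      metrics['average_loss'], metrics['accuracy'])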
tf.compat.v1.estimator.DNNLinearCombinedEstimator An estimator for TensorFlow Linear and DNN joined models with custom head. Inherits From: Estimator tf.compat.v1.estimator.DNNLinearCombinedEstimator( head, model_dir=None, linear_feature_columns=None, linear_optimizer='Ftrl', dnn_feature_columns=None, dnn_optimizer='Adagrad', dnn_hidden_units=None, dnn_activation_fn=tf.nn.relu, dnn_dropout=None, input_layer_partitioner=None, config=None, linear_sparse_combiner='sum' ) Note: This estimator is also known as wide-n-deep. Example:

numeric_feature = numeric_column(...)
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
categorical_feature_a_emb = embedding_column(
    categorical_column=categorical_column_a, ...)
categorical_feature_b_emb = embedding_column(
    categorical_column=categorical_column_b, ...)

estimator = tf.estimator.DNNLinearCombinedEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    # wide settings
    linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
    linear_optimizer=tf.keras.optimizers.Ftrl(...),
    # deep settings
    dnn_feature_columns=[
        categorical_feature_a_emb, categorical_feature_b_emb,
        numeric_feature],
    dnn_hidden_units=[1000, 500, 100],
    dnn_optimizer=tf.keras.optimizers.Adagrad(...))

# To apply L1 and L2 regularization, you can set dnn_optimizer to:
tf.compat.v1.train.ProximalAdagradOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,
    l2_regularization_strength=0.001)
# To apply learning rate decay, you can set dnn_optimizer to a callable:
lambda: tf.keras.optimizers.Adam(
    learning_rate=tf.compat.v1.train.exponential_decay(
        learning_rate=0.1,
        global_step=tf.compat.v1.train.get_global_step(),
        decay_steps=10000,
        decay_rate=0.96))
# It is the same for linear_optimizer.

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)

Input of train and evaluate should have the following features, otherwise there will be a KeyError: for each column in dnn_feature_columns + linear_feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor. Loss is calculated by using mean squared error. Args model_fn Model function. Follows the signature: features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same. labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None. mode -- Optional.
Specifies whether this is training, evaluation, or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in params parameter. This allows configuring Estimators from hyperparameter tuning. config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas, or model_dir. Returns -- tf.estimator.EstimatorSpec model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used. config estimator.RunConfig configuration object. params dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic Python types. warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged. Raises ValueError parameters of model_fn don't match params. ValueError if this is called via a subclass and if that class overrides a member of Estimator. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes. Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params Methods eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped. Args name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard. Returns A string which is the path of the directory containing the evaluation metrics. evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. steps Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.

Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.

Raises ValueError If steps <= 0.

experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
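Before the argument reference below, a minimal usage sketch. It assumes the estimator built in the example at the top of this page, a feature_columns list matching the model's columns, and a writable /tmp/export directory; all three are hypothetical:

feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_dir = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/export',
    input_receiver_fn_map={
        tf.estimator.ModeKeys.PREDICT: serving_input_receiver_fn,
    })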
Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

Returns The path to the exported directory as a bytes object.

Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.

export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.

Returns The path to the exported directory as a bytes object.
Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of string, name of the tensor. Returns Numpy array - value of the tensor. Raises ValueError If the Estimator has not produced a checkpoint yet. 
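For instance, a short sketch of inspecting checkpointed variables with the two methods above, assuming train() has already produced a checkpoint; the 'bias' substring filter is arbitrary and for illustration only:

for name in estimator.get_variable_names():
  if 'bias' in name:  # arbitrary filter, for illustration only
    print(name, estimator.get_variable_value(name))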
latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of predictions tensors. Raises ValueError If batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call two times train(steps=10) then training occurs in total 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want to have incremental behavior please set max_steps instead. If set, max_steps must be None. 
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
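To make the steps/max_steps distinction above concrete, a short sketch reusing the input_fn_train builder from the example at the top of this page; global step counts are cumulative within one model_dir:

# steps is incremental: the second call runs 10 more steps, 20 in total.
estimator.train(input_fn=input_fn_train, steps=10)
estimator.train(input_fn=input_fn_train, steps=10)

# max_steps is absolute: this call trains until the global step reaches 100,
estimator.train(input_fn=input_fn_train, max_steps=100)
# and this one is a no-op because the global step is already at 100.
estimator.train(input_fn=input_fn_train, max_steps=100)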
tf.compat.v1.estimator.DNNLinearCombinedRegressor An estimator for TensorFlow Linear and DNN joined models for regression. Inherits From: Estimator

tf.compat.v1.estimator.DNNLinearCombinedRegressor(
    model_dir=None, linear_feature_columns=None, linear_optimizer='Ftrl',
    dnn_feature_columns=None, dnn_optimizer='Adagrad', dnn_hidden_units=None,
    dnn_activation_fn=tf.nn.relu, dnn_dropout=None, label_dimension=1,
    weight_column=None, input_layer_partitioner=None, config=None,
    warm_start_from=None, loss_reduction=tf.compat.v1.losses.Reduction.SUM,
    batch_norm=False, linear_sparse_combiner='sum'
)

Note: This estimator is also known as wide-n-deep.

Example:

numeric_feature = numeric_column(...)
categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)
categorical_feature_a_emb = embedding_column(
    categorical_column=categorical_feature_a, ...)
categorical_feature_b_emb = embedding_column(
    categorical_column=categorical_feature_b, ...)

estimator = tf.estimator.DNNLinearCombinedRegressor(
    # wide settings
    linear_feature_columns=[categorical_feature_a_x_categorical_feature_b],
    linear_optimizer=tf.keras.optimizers.Ftrl(...),
    # deep settings
    dnn_feature_columns=[
        categorical_feature_a_emb, categorical_feature_b_emb,
        numeric_feature],
    dnn_hidden_units=[1000, 500, 100],
    dnn_optimizer=tf.keras.optimizers.Adagrad(...),
    # warm-start settings
    warm_start_from="/path/to/checkpoint/dir")

# To apply L1 and L2 regularization, you can set dnn_optimizer to:
tf.compat.v1.train.ProximalAdagradOptimizer(
    learning_rate=0.1,
    l1_regularization_strength=0.001,
    l2_regularization_strength=0.001)

# To apply learning rate decay, you can set dnn_optimizer to a callable:
lambda: tf.keras.optimizers.Adam(
    learning_rate=tf.compat.v1.train.exponential_decay(
        learning_rate=0.1,
        global_step=tf.compat.v1.train.get_global_step(),
        decay_steps=10000,
        decay_rate=0.96))
# It is the same for linear_optimizer.

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents the label.
  pass

def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents the label.
  pass

def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass

estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)

Input of train and evaluate should have the following features, otherwise there will be a KeyError: for each column in dnn_feature_columns + linear_feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor. Loss is calculated by using mean squared error.
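As a self-contained sketch of the feature contract above, the following uses invented column names ('price', 'city') and toy data that are not from the original docs:

import tensorflow as tf

price = tf.feature_column.numeric_column('price')
city = tf.feature_column.categorical_column_with_hash_bucket('city', 100)
city_emb = tf.feature_column.embedding_column(city, dimension=8)

estimator = tf.compat.v1.estimator.DNNLinearCombinedRegressor(
    linear_feature_columns=[city],
    dnn_feature_columns=[price, city_emb],
    dnn_hidden_units=[16, 8])

def input_fn():
  # 'price' feeds a DenseColumn; 'city' feeds a CategoricalColumn (the
  # conversion to SparseTensor happens inside the feature column).
  features = {'price': [1.0, 2.0], 'city': ['a', 'b']}
  labels = [1.5, 2.5]
  dataset = tf.data.Dataset.from_tensor_slices((features, labels))
  return dataset.repeat().batch(2)

estimator.train(input_fn=input_fn, steps=5)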
Args

model_fn Model function. Follows the signature: features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same. labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None. mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in the params parameter. This allows Estimators to be configured through hyperparameter tuning. config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration, such as num_ps_replicas or model_dir. Returns -- tf.estimator.EstimatorSpec

model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object is given, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used.

config estimator.RunConfig configuration object.

params dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic Python types.

warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged.

Raises ValueError if parameters of model_fn don't match params. ValueError if this is called via a subclass and if that class overrides a member of Estimator.

Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.

Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params

Methods

eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped.

Args name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.

Returns A string which is the path of the directory containing the evaluation metrics.

evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).

Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with the same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
steps Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.

Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.

Raises ValueError If steps <= 0.

experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.
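One way to obtain an input_receiver_fn for the PREDICT entry of input_receiver_fn_map (documented below) is the raw builder; this is a sketch, and the feature name and shape are hypothetical and must match the model's feature columns:

features = {'price': tf.compat.v1.placeholder(tf.float32, shape=[None])}
serving_input_receiver_fn = (
    tf.estimator.export.build_raw_serving_input_receiver_fn(features))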
Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

Returns The path to the exported directory as a bytes object.

Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.

export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.
Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of string, name of the tensor. Returns Numpy array - value of the tensor. 
Raises ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of predictions tensors. Raises ValueError If batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call two times train(steps=10) then training occurs in total 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want to have incremental behavior please set max_steps instead. 
If set, max_steps must be None. max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
tf.compat.v1.estimator.DNNRegressor A regressor for TensorFlow DNN models. Inherits From: Estimator

tf.compat.v1.estimator.DNNRegressor(
    hidden_units, feature_columns, model_dir=None, label_dimension=1,
    weight_column=None, optimizer='Adagrad', activation_fn=tf.nn.relu,
    dropout=None, input_layer_partitioner=None, config=None,
    warm_start_from=None, loss_reduction=tf.compat.v1.losses.Reduction.SUM,
    batch_norm=False
)

Example:

categorical_feature_a = categorical_column_with_hash_bucket(...)
categorical_feature_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_emb = embedding_column(
    categorical_column=categorical_feature_a, ...)
categorical_feature_b_emb = embedding_column(
    categorical_column=categorical_feature_b, ...)

estimator = tf.estimator.DNNRegressor(
    feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
    hidden_units=[1024, 512, 256])

# Or estimator using the ProximalAdagradOptimizer optimizer with
# regularization.
estimator = tf.estimator.DNNRegressor(
    feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
    hidden_units=[1024, 512, 256],
    optimizer=tf.compat.v1.train.ProximalAdagradOptimizer(
        learning_rate=0.1,
        l1_regularization_strength=0.001))

# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.DNNRegressor(
    feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
    hidden_units=[1024, 512, 256],
    optimizer=lambda: tf.keras.optimizers.Adam(
        learning_rate=tf.compat.v1.train.exponential_decay(
            learning_rate=0.1,
            global_step=tf.compat.v1.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))

# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.DNNRegressor(
    feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb],
    hidden_units=[1024, 512, 256],
    warm_start_from="/path/to/checkpoint/dir")

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents the label.
  pass

def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents the label.
  pass

def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass

estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)

Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor. for each column in feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a DenseColumn, a feature with key=column.name whose value is a Tensor. Loss is calculated by using mean squared error.
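To illustrate the weight_column requirement above, a sketch with invented feature names ('x', 'w') and toy values, not part of the original docs:

import tensorflow as tf

estimator = tf.compat.v1.estimator.DNNRegressor(
    feature_columns=[tf.feature_column.numeric_column('x')],
    hidden_units=[8],
    weight_column='w')

def input_fn():
  # 'w' must appear in the feature dict because weight_column='w'.
  features = {'x': [1.0, 2.0], 'w': [1.0, 0.5]}
  labels = [2.0, 4.0]
  dataset = tf.data.Dataset.from_tensor_slices((features, labels))
  return dataset.repeat().batch(2)

estimator.train(input_fn=input_fn, steps=5)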
Args

model_fn Model function. Follows the signature: features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same. labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None. mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in the params parameter. This allows Estimators to be configured through hyperparameter tuning. config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration, such as num_ps_replicas or model_dir. Returns -- tf.estimator.EstimatorSpec

model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object is given, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used.

config estimator.RunConfig configuration object.

params dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic Python types.

warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged.

Raises ValueError if parameters of model_fn don't match params. ValueError if this is called via a subclass and if that class overrides a member of Estimator.

Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.

Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params

Methods

eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped.

Args name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.

Returns A string which is the path of the directory containing the evaluation metrics.

evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).

Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with the same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. steps Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception.
hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in TensorBoard.

Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.

Raises ValueError If steps <= 0.

experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

Returns The path to the exported directory as a bytes object.

Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.

export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.

Returns The path to the exported directory as a bytes object.
Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of string, name of the tensor. Returns Numpy array - value of the tensor. Raises ValueError If the Estimator has not produced a checkpoint yet. 
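A sketch of the export flow described above, including an assets_extra entry. It assumes the model was built from a feature_columns list; that list and all paths are hypothetical:

feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
export_path = estimator.export_saved_model(
    '/tmp/regressor_export', serving_input_receiver_fn,
    assets_extra={'vocab.txt': '/path/to/vocab.txt'})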
latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of predictions tensors. Raises ValueError If batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call two times train(steps=10) then training occurs in total 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want to have incremental behavior please set max_steps instead. If set, max_steps must be None. 
max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
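The incremental semantics of steps versus the absolute semantics of max_steps can be made concrete with a short sketch (estimator and input_fn_train as in the example above; each pair assumes a fresh model_dir):

# `steps` is incremental:
estimator.train(input_fn=input_fn_train, steps=100)  # global step is now 100
estimator.train(input_fn=input_fn_train, steps=100)  # global step is now 200

# `max_steps` is absolute:
estimator.train(input_fn=input_fn_train, max_steps=100)  # trains to global step 100
estimator.train(input_fn=input_fn_train, max_steps=100)  # no-op; already at step 100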
tf.compat.v1.estimator.Estimator Estimator class to train and evaluate TensorFlow models. tf.compat.v1.estimator.Estimator( model_fn, model_dir=None, config=None, params=None, warm_start_from=None ) The Estimator object wraps a model which is specified by a model_fn, which, given inputs and a number of other parameters, returns the ops necessary to perform training, evaluation, or prediction. All outputs (checkpoints, event files, etc.) are written to model_dir, or a subdirectory thereof. If model_dir is not set, a temporary directory is used. The config argument can be passed a tf.estimator.RunConfig object containing information about the execution environment. It is passed on to the model_fn if the model_fn has a parameter named "config" (and to the input functions in the same manner). If the config parameter is not passed, it is instantiated by the Estimator. Not passing config means that defaults useful for local execution are used. Estimator makes config available to the model (for instance, to allow specialization based on the number of workers available), and also uses some of its fields to control internals, especially regarding checkpointing. The params argument contains hyperparameters. It is passed to the model_fn if the model_fn has a parameter named "params", and to the input functions in the same manner. Estimator only passes params along; it does not inspect it. The structure of params is therefore entirely up to the developer. None of Estimator's methods can be overridden in subclasses (its constructor enforces this). Subclasses should use model_fn to configure the base class, and may add methods implementing specialized functionality. See estimators for more information. To warm-start an Estimator: estimator = tf.estimator.DNNClassifier( feature_columns=[categorical_feature_a_emb, categorical_feature_b_emb], hidden_units=[1024, 512, 256], warm_start_from="/path/to/checkpoint/dir") For more details on warm-start configuration, see tf.estimator.WarmStartSettings. Args model_fn Model function. Follows the signature: features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same. labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None. mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in the params parameter. This allows Estimators to be configured from hyperparameter tuning. config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration, such as num_ps_replicas or model_dir. Returns -- tf.estimator.EstimatorSpec model_dir Directory to save model parameters, graph, and so on. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used. config estimator.RunConfig configuration object. 
params dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic Python types. warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged. Raises ValueError if parameters of model_fn don't match params. ValueError if this is called via a subclass and if that class overrides a member of Estimator. Eager Compatibility Calling methods of Estimator will work while eager execution is enabled. However, the model_fn and input_fn are not executed eagerly; Estimator will switch to graph mode before calling all user-provided functions (incl. hooks), so their code has to be compatible with graph mode execution. Note that input_fn code using tf.data generally works in both graph and eager modes. Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params Methods eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped. Args name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A string which is the path of the directory containing evaluation metrics. evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. 
For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean. Raises ValueError If steps <= 0. experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. Returns The path to the exported directory as a bytes object. Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. 
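A minimal sketch of calling this method to export just a serving graph; the feature spec is hypothetical and must match the model's actual features, and estimator is assumed to be a trained instance:

import tensorflow.compat.v1 as tf

# Hypothetical spec for parsing serialized tf.Examples at serving time.
feature_spec = {'x': tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)}
serving_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    feature_spec)

export_path = estimator.experimental_export_all_saved_models(
    export_dir_base='/tmp/exports',
    input_receiver_fn_map={tf.estimator.ModeKeys.PREDICT: serving_fn})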
export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. 
It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of string, name of the tensor. Returns Numpy array - value of the tensor. Raises ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. 
features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of predictions tensors. Raises ValueError If batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call two times train(steps=10) then training occurs in total 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want to have incremental behavior please set max_steps instead. If set, max_steps must be None. max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
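To make the model_fn contract above concrete, here is a hedged, minimal sketch of a toy linear regression (the feature name 'x' and the 'lr' hyperparameter are made up for illustration; this is not the canonical implementation):

import tensorflow.compat.v1 as tf

def model_fn(features, labels, mode, params):
    # A single weight and bias for a toy linear model.
    w = tf.get_variable('w', shape=[1], dtype=tf.float32)
    b = tf.get_variable('b', shape=[1], dtype=tf.float32)
    predictions = features['x'] * w + b
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)
    loss = tf.losses.mean_squared_error(labels, predictions)
    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss)
    optimizer = tf.train.GradientDescentOptimizer(params.get('lr', 0.1))
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn, params={'lr': 0.1})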
Module: tf.compat.v1.estimator.experimental Public API for tf.estimator.experimental namespace. Classes class InMemoryEvaluatorHook: Hook to run evaluation in training without a checkpoint. class KMeans: An Estimator for K-Means clustering. class LinearSDCA: Stochastic Dual Coordinate Ascent helper for linear estimators. Functions build_raw_supervised_input_receiver_fn(...): Build a supervised_input_receiver_fn for raw features and labels. call_logit_fn(...): Calls logit_fn (experimental). dnn_logit_fn_builder(...): Function builder for a dnn logit_fn. linear_logit_fn_builder(...): Function builder for a linear logit_fn. make_early_stopping_hook(...): Creates early-stopping hook. make_stop_at_checkpoint_step_hook(...): Creates a proper StopAtCheckpointStepHook based on chief status. stop_if_higher_hook(...): Creates hook to stop if the given metric is higher than the threshold. stop_if_lower_hook(...): Creates hook to stop if the given metric is lower than the threshold. stop_if_no_decrease_hook(...): Creates hook to stop if metric does not decrease within given max steps. stop_if_no_increase_hook(...): Creates hook to stop if metric does not increase within given max steps.
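For example, the early-stopping hooks combine with train roughly as follows; a sketch assuming estimator and input_fn_train are already defined and that the model reports a 'loss' eval metric:

import tensorflow.compat.v1 as tf

# Stop if eval 'loss' has not decreased within 1000 training steps.
early_stopping = tf.estimator.experimental.stop_if_no_decrease_hook(
    estimator, metric_name='loss', max_steps_without_decrease=1000)
estimator.train(input_fn=input_fn_train, hooks=[early_stopping])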
tf.compat.v1.estimator.experimental.dnn_logit_fn_builder Function builder for a dnn logit_fn. tf.compat.v1.estimator.experimental.dnn_logit_fn_builder( units, hidden_units, feature_columns, activation_fn, dropout, input_layer_partitioner, batch_norm ) Args units An int indicating the dimension of the logit layer. In the MultiHead case, this should be the sum of all component Heads' logit dimensions. hidden_units Iterable of integer number of hidden units per layer. feature_columns Iterable of feature_column._FeatureColumn model inputs. activation_fn Activation function applied to each layer. dropout When not None, the probability we will drop out a given coordinate. input_layer_partitioner Partitioner for input layer. batch_norm Whether to use batch normalization after each hidden layer. Returns A logit_fn (see below). Raises ValueError If units is not an int.
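A hypothetical usage sketch; the feature column and layer sizes are illustrative, and the returned logit_fn is assumed to accept features and mode:

import tensorflow.compat.v1 as tf

logit_fn = tf.estimator.experimental.dnn_logit_fn_builder(
    units=3,
    hidden_units=[64, 32],
    feature_columns=[tf.feature_column.numeric_column('x', shape=[4])],
    activation_fn=tf.nn.relu,
    dropout=None,
    input_layer_partitioner=None,
    batch_norm=False)
# Inside a custom model_fn:
# logits = logit_fn(features=features, mode=mode)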
tf.compat.v1.estimator.experimental.KMeans An Estimator for K-Means clustering. Inherits From: Estimator tf.compat.v1.estimator.experimental.KMeans( num_clusters, model_dir=None, initial_clusters=RANDOM_INIT, distance_metric=SQUARED_EUCLIDEAN_DISTANCE, seed=None, use_mini_batch=True, mini_batch_steps_per_iteration=1, kmeans_plus_plus_num_retries=2, relative_tolerance=None, config=None, feature_columns=None ) Example:

import numpy as np
import tensorflow as tf

num_points = 100
dimensions = 2
points = np.random.uniform(0, 1000, [num_points, dimensions])

def input_fn():
  return tf.compat.v1.train.limit_epochs(
      tf.convert_to_tensor(points, dtype=tf.float32), num_epochs=1)

num_clusters = 5
kmeans = tf.compat.v1.estimator.experimental.KMeans(
    num_clusters=num_clusters, use_mini_batch=False)

# train
num_iterations = 10
previous_centers = None
for _ in range(num_iterations):
  kmeans.train(input_fn)
  cluster_centers = kmeans.cluster_centers()
  if previous_centers is not None:
    print('delta:', cluster_centers - previous_centers)
  previous_centers = cluster_centers
  print('score:', kmeans.score(input_fn))
print('cluster centers:', cluster_centers)

# map the input points to their clusters
cluster_indices = list(kmeans.predict_cluster_index(input_fn))
for i, point in enumerate(points):
  cluster_index = cluster_indices[i]
  center = cluster_centers[cluster_index]
  print('point:', point, 'is in cluster', cluster_index, 'centered at', center)

The SavedModel saved by the export_saved_model method does not include the cluster centers. However, the cluster centers may be retrieved from the latest checkpoint saved during training. Specifically, kmeans.cluster_centers() is equivalent to tf.train.load_variable( kmeans.model_dir, KMeansClustering.CLUSTER_CENTERS_VAR_NAME) Args num_clusters An integer tensor specifying the number of clusters. This argument is ignored if initial_clusters is a tensor or numpy array. model_dir The directory to save the model results and log files. initial_clusters Specifies how the initial cluster centers are chosen. One of the following: * a tensor or numpy array with the initial cluster centers. * a callable f(inputs, k) that selects and returns up to k centers from an input batch. f is free to return any number of centers from 0 to k. It will be invoked on successive input batches as necessary until all num_clusters centers are chosen. KMeansClustering.RANDOM_INIT: Choose centers randomly from an input batch. If the batch size is less than num_clusters then the entire batch is chosen to be initial cluster centers and the remaining centers are chosen from successive input batches. KMeansClustering.KMEANS_PLUS_PLUS_INIT: Use kmeans++ to choose centers from the first input batch. If the batch size is less than num_clusters, a TensorFlow runtime error occurs. distance_metric The distance metric used for clustering. One of: KMeansClustering.SQUARED_EUCLIDEAN_DISTANCE: Euclidean distance between vectors u and v is defined as \(||u - v||_2\) which is the square root of the sum of the absolute squares of the elements' difference. KMeansClustering.COSINE_DISTANCE: Cosine distance between vectors u and v is defined as \(1 - (u . v) / (||u||_2 ||v||_2)\). seed Python integer. Seed for PRNG used to initialize centers. use_mini_batch A boolean specifying whether to use the mini-batch k-means algorithm. See explanation above. mini_batch_steps_per_iteration The number of steps after which the updated cluster centers are synced back to a master copy. Used only if use_mini_batch=True. See explanation above. 
kmeans_plus_plus_num_retries For each point that is sampled during kmeans++ initialization, this parameter specifies the number of additional points to draw from the current distribution before selecting the best. If a negative value is specified, a heuristic is used to sample O(log(num_to_sample)) additional points. Used only if initial_clusters=KMeansClustering.KMEANS_PLUS_PLUS_INIT. relative_tolerance A relative tolerance of change in the loss between iterations. Stops learning if the loss changes less than this amount. This may not work correctly if use_mini_batch=True. config See tf.estimator.Estimator. feature_columns An optional iterable containing all the feature columns used by the model. All items in the set should be feature column instances that can be passed to tf.feature_column.input_layer. If this is None, all features will be used. Raises ValueError if an invalid argument was passed to initial_clusters or distance_metric. Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params Methods cluster_centers View source cluster_centers() Returns the cluster centers. eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped. Args name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A string which is the path of the directory containing evaluation metrics. evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). 
Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean. Raises ValueError If steps <= 0. experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. Returns The path to the exported directory as a bytes object. Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. 
For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. 
Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no argument and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of string, name of the tensor. Returns Numpy array - value of the tensor. Raises ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. 
They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, name of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used then rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of predictions tensors. Raises ValueError If batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. predict_cluster_index View source predict_cluster_index( input_fn ) Finds the index of the closest cluster center to each input point. Args input_fn Input points. See tf.estimator.Estimator.predict. Yields The index of the closest cluster center for each input point. score View source score( input_fn ) Returns the sum of squared distances to nearest clusters. Note that this function is different from the corresponding one in sklearn which returns the negative sum. Args input_fn Input points. See tf.estimator.Estimator.evaluate. Only one batch is retrieved. Returns The sum of the squared distance from each point in the first batch of inputs to its nearest cluster center. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally. If you call two times train(steps=10) then training occurs in total 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want to have incremental behavior please set max_steps instead. If set, max_steps must be None. max_steps Number of total steps for which to train model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. 
If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) means 200 training iterations. On the other hand, two calls to train(max_steps=100) means that the second call will not do any iteration since first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0. transform View source transform( input_fn ) Transforms each input point to its distances to all cluster centers. Note that if distance_metric=KMeansClustering.SQUARED_EUCLIDEAN_DISTANCE, this function returns the squared Euclidean distance while the corresponding sklearn function returns the Euclidean distance. Args input_fn Input points. See tf.estimator.Estimator.predict. Yields The distances from each input point to each cluster center. Class Variables ALL_DISTANCES 'all_distances' CLUSTER_CENTERS_VAR_NAME 'clusters' CLUSTER_INDEX 'cluster_index' COSINE_DISTANCE 'cosine' KMEANS_PLUS_PLUS_INIT 'kmeans_plus_plus' RANDOM_INIT 'random' SCORE 'score' SQUARED_EUCLIDEAN_DISTANCE 'squared_euclidean'
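As a small follow-up to the example above (kmeans and input_fn as defined there), transform yields one distance vector per input point; treat this as a sketch:

# Distances from each input point to every cluster center
# (squared Euclidean under the default distance metric).
for distances in kmeans.transform(input_fn):
    print('distances to centers:', distances)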
tf.compat.v1.estimator.experimental.linear_logit_fn_builder Function builder for a linear logit_fn. tf.compat.v1.estimator.experimental.linear_logit_fn_builder( units, feature_columns, sparse_combiner='sum' ) Args units An int indicating the dimension of the logit layer. feature_columns An iterable containing all the feature columns used by the model. sparse_combiner A string specifying how to reduce if a categorical column is multivalent. One of "mean", "sqrtn", and "sum". Returns A logit_fn (see below).
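A hypothetical usage sketch; the feature column is illustrative, and the returned logit_fn is assumed to accept the features dict:

import tensorflow.compat.v1 as tf

logit_fn = tf.estimator.experimental.linear_logit_fn_builder(
    units=1,
    feature_columns=[tf.feature_column.numeric_column('x')],
    sparse_combiner='sum')
# Inside a custom model_fn:
# logits = logit_fn(features=features)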
Module: tf.compat.v1.estimator.export All public utility methods for exporting Estimator to SavedModel. This file includes functions and constants from core (model_utils) and export.py Classes class ClassificationOutput: Represents the output of a classification head. class ExportOutput: Represents an output of a model that can be served. class PredictOutput: Represents the output of a generic prediction head. class RegressionOutput: Represents the output of a regression head. class ServingInputReceiver: A return type for a serving_input_receiver_fn. class TensorServingInputReceiver: A return type for a serving_input_receiver_fn. Functions build_parsing_serving_input_receiver_fn(...): Build a serving_input_receiver_fn expecting fed tf.Examples. build_raw_serving_input_receiver_fn(...): Build a serving_input_receiver_fn expecting feature Tensors.
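For instance, a raw serving_input_receiver_fn for a single dense feature might be built as follows (the feature name and shape are illustrative):

import tensorflow.compat.v1 as tf

features = {'x': tf.placeholder(dtype=tf.float32, shape=[None, 1], name='x')}
serving_input_receiver_fn = (
    tf.estimator.export.build_raw_serving_input_receiver_fn(features))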
Module: tf.compat.v1.estimator.inputs Utility methods to create simple input_fns. Functions numpy_input_fn(...): Returns input function that would feed dict of numpy arrays into the model. pandas_input_fn(...): Returns input function that would feed Pandas DataFrame into the model.
tf.compat.v1.estimator.inputs.numpy_input_fn Returns input function that would feed dict of numpy arrays into the model. tf.compat.v1.estimator.inputs.numpy_input_fn( x, y=None, batch_size=128, num_epochs=1, shuffle=None, queue_capacity=1000, num_threads=1 ) This returns a function outputting features and targets based on the dict of numpy arrays. The dict features has the same keys as x. The dict targets has the same keys as y if y is a dict. Example:

import numpy as np
import tensorflow as tf

age = np.arange(4) * 1.0
height = np.arange(32, 36)
x = {'age': age, 'height': height}
y = np.arange(-32, -28)

input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
    x, y, batch_size=2, shuffle=False, num_epochs=1)

Args x numpy array object or dict of numpy array objects. If an array, the array will be treated as a single feature. y numpy array object or dict of numpy array object. None if absent. batch_size Integer, size of batches to return. num_epochs Integer, number of epochs to iterate over data. If None will run forever. shuffle Boolean, if True shuffles the queue. Avoid shuffle at prediction time. queue_capacity Integer, size of queue to accumulate. num_threads Integer, number of threads used for reading and enqueueing. In order to have a predictable and repeatable order of reading and enqueueing, such as in prediction and evaluation mode, num_threads should be 1. Returns Function that has the signature ()->(dict of features, targets) Raises ValueError if the shape of y mismatches the shape of the values in x (i.e., all values in x must have the same shape). ValueError if duplicate keys are in both x and y when y is a dict. ValueError if x or y is an empty dict. TypeError if x is not a dict or array. ValueError if shuffle is not provided or is not a bool.
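The returned function is then passed, uncalled, to estimator methods; a sketch assuming estimator is an estimator whose model consumes the 'age' and 'height' features:

estimator.train(input_fn=input_fn)
metrics = estimator.evaluate(input_fn=input_fn)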
tf.compat.v1.estimator.inputs.pandas_input_fn Returns input function that would feed Pandas DataFrame into the model. tf.compat.v1.estimator.inputs.pandas_input_fn( x, y=None, batch_size=128, num_epochs=1, shuffle=None, queue_capacity=1000, num_threads=1, target_column='target' ) Note: y's index must match x's index. Args x pandas DataFrame object. y pandas Series object or DataFrame. None if absent. batch_size int, size of batches to return. num_epochs int, number of epochs to iterate over data. If not None, read attempts that would exceed this value will raise OutOfRangeError. shuffle bool, whether to read the records in random order. queue_capacity int, size of the read queue. If None, it will be set roughly to the size of x. num_threads Integer, number of threads used for reading and enqueueing. In order to have a predictable and repeatable order of reading and enqueueing, such as in prediction and evaluation mode, num_threads should be 1. target_column str, name to give the target column y. This parameter is not used when y is a DataFrame. Returns Function that has the signature ()->(dict of features, target) Raises ValueError if x already contains a column with the same name as y, or if the indexes of x and y don't match. ValueError if shuffle is not provided or is not a bool.
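This page has no example of its own; a minimal sketch with illustrative column names and values:

import pandas as pd
import tensorflow.compat.v1 as tf

x = pd.DataFrame({'age': [20.0, 30.0, 40.0, 50.0],
                  'height': [60.0, 62.0, 64.0, 66.0]})
y = pd.Series([0, 1, 1, 0], name='target')  # index matches x's index

input_fn = tf.estimator.inputs.pandas_input_fn(
    x=x, y=y, batch_size=2, shuffle=True, num_epochs=1)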
tf.compat.v1.estimator.LinearClassifier Linear classifier model. Inherits From: Estimator tf.compat.v1.estimator.LinearClassifier( feature_columns, model_dir=None, n_classes=2, weight_column=None, label_vocabulary=None, optimizer='Ftrl', config=None, partitioner=None, warm_start_from=None, loss_reduction=tf.compat.v1.losses.Reduction.SUM, sparse_combiner='sum' ) Train a linear model to classify instances into one of multiple possible classes. When the number of possible classes is 2, this is binary classification. Example:

categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)

# Estimator using the default optimizer.
estimator = tf.estimator.LinearClassifier(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b])

# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearClassifier(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=tf.keras.optimizers.Ftrl(
        learning_rate=0.1,
        l1_regularization_strength=0.001))

# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearClassifier(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=lambda: tf.keras.optimizers.Ftrl(
        learning_rate=tf.compat.v1.train.exponential_decay(
            learning_rate=0.1,
            global_step=tf.compat.v1.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))

# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.LinearClassifier(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    warm_start_from="/path/to/checkpoint/dir")

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass
def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass
estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)

Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor. for each column in feature_columns: if column is a SparseColumn, a feature with key=column.name whose value is a SparseTensor. if column is a WeightedSparseColumn, two features: the first with key the id column name, the second with key the weight column name. Both features' value must be a SparseTensor. if column is a RealValuedColumn, a feature with key=column.name whose value is a Tensor. Loss is calculated by using softmax cross entropy. Args model_fn Model function. Follows the signature: features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same. labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None. mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys. 
params -- Optional dict of hyperparameters. Will receive what is passed to Estimator in the params parameter. This allows Estimators to be configured from hyperparameter tuning. config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration, such as num_ps_replicas or model_dir. Returns -- tf.estimator.EstimatorSpec model_dir Directory to save model parameters, graph, and so on. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used. config estimator.RunConfig configuration object. params dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic Python types. warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged. Raises ValueError if parameters of model_fn don't match params. ValueError if this is called via a subclass and if that class overrides a member of Estimator. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes. Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params Methods eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped. Args name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A string which is the path of the directory containing evaluation metrics. evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. 
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.

name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.

Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.

Raises ValueError If steps <= 0.

experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory.

For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures.

For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.

assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.

as_text whether to write the SavedModel proto in text format.

checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

Returns The path to the exported directory as a bytes object.

Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.

export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.

The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.

serving_input_receiver_fn A function that takes no arguments and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.

assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.

as_text whether to write the SavedModel proto in text format.

checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.

Returns The path to the exported directory as a bytes object.
Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.

export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead.

For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.

The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.

serving_input_receiver_fn A function that takes no arguments and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.

assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.

as_text whether to write the SavedModel proto in text format.

checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.

Returns The path to the exported directory as a bytes object.

Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.

get_variable_names View source get_variable_names() Returns list of all variable names in this model.

Returns List of names.

Raises ValueError If the Estimator has not produced a checkpoint yet.

get_variable_value View source get_variable_value( name ) Returns value of the variable given by name.

Args name string or a list of strings, name of the tensor.

Returns Numpy array - value of the tensor.

Raises ValueError If the Estimator has not produced a checkpoint yet.
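The two variable accessors above are easiest to see together. Below is a minimal sketch, not from the original docs, of training a tiny LinearClassifier and then listing and reading back its checkpointed variables; the feature name 'age' and the toy data are hypothetical.

import tensorflow as tf

age = tf.feature_column.numeric_column('age')
estimator = tf.compat.v1.estimator.LinearClassifier(feature_columns=[age])

def input_fn():
  # Two toy examples; labels are class indices (n_classes defaults to 2).
  return tf.data.Dataset.from_tensor_slices(
      ({'age': [18.0, 64.0]}, [0, 1])).batch(2)

estimator.train(input_fn=input_fn)

# List every variable saved in the model's checkpoint, then read each one
# back as a numpy array via get_variable_value.
for name in estimator.get_variable_names():
  print(name, estimator.get_variable_value(name))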
latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir.

Returns The full path to the latest checkpoint or None if no checkpoint was found.

predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506

Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of the Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.

predict_keys list of str, names of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used, the rest of the predictions will be filtered from the dictionary. If None, returns all.

hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.

checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.

yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.

Yields Evaluated values of predictions tensors.

Raises ValueError If the batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict.

train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn.

Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.

hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.

steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iteration since the first call did all 100 steps. The sketch below illustrates the difference.

saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.

Returns self, for chaining.

Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
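As referenced above, a minimal sketch of the steps vs. max_steps behavior, assuming estimator and a repeating input_fn are already defined as in the example at the top of this entry:

estimator.train(input_fn=input_fn, steps=100)      # global step: 0 -> 100
estimator.train(input_fn=input_fn, steps=100)      # incremental: 100 -> 200

estimator.train(input_fn=input_fn, max_steps=100)  # no-op here: the global
                                                   # step is already past 100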
tf.compat.v1.estimator.LinearEstimator An estimator for TensorFlow linear models with user-specified head. Inherits From: Estimator

tf.compat.v1.estimator.LinearEstimator(
    head, feature_columns, model_dir=None, optimizer='Ftrl', config=None,
    partitioner=None, sparse_combiner='sum', warm_start_from=None
)

Example:

categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)

# Estimator using the default optimizer.
estimator = tf.estimator.LinearEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b])

# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=lambda: tf.keras.optimizers.Ftrl(
        learning_rate=tf.compat.v1.train.exponential_decay(
            learning_rate=0.1,
            global_step=tf.compat.v1.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))

# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearEstimator(
    head=tf.estimator.MultiLabelHead(n_classes=3),
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=tf.keras.optimizers.Ftrl(
        learning_rate=0.1,
        l1_regularization_strength=0.001
    ))

def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass

def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents label's class
  # index.
  pass

def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass

estimator.train(input_fn=input_fn_train, steps=100)
metrics = estimator.evaluate(input_fn=input_fn_eval, steps=10)
predictions = estimator.predict(input_fn=input_fn_predict)

Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor; for each column in feature_columns: if column is a CategoricalColumn, a feature with key=column.name whose value is a SparseTensor; if column is a WeightedCategoricalColumn, two features: the first with key the id column name, the second with key the weight column name (both features' value must be a SparseTensor); if column is a DenseColumn, a feature with key=column.name whose value is a Tensor.

Loss and predicted output are determined by the specified head.

Args head A _Head instance constructed with a method such as tf.contrib.estimator.multi_label_head.

feature_columns An iterable containing all the feature columns used by the model. All items in the set should be instances of classes derived from FeatureColumn.

model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.

optimizer An instance of tf.Optimizer used to train the model. Can also be a string (one of 'Adagrad', 'Adam', 'Ftrl', 'RMSProp', 'SGD'), or callable. Defaults to the FTRL optimizer.

config RunConfig object to configure the runtime settings.

partitioner Optional. Partitioner for the input layer.

sparse_combiner A string specifying how to reduce if a categorical column is multivalent.
One of "mean", "sqrtn", and "sum" -- these are effectively different ways to do example-level normalization, which can be useful for bag-of-words features. for more details, see tf.feature_column.linear_model. warm_start_from A string filepath to a checkpoint to warm-start from, or a WarmStartSettings object to fully configure warm-starting. If the string filepath is provided instead of a WarmStartSettings, then all weights and biases are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes. Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params Methods eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped. Args name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A string which is the path of directory contains evaluation metrics. evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean. Raises ValueError If steps <= 0. 
experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory.

For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures.

For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.

input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.

assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.

as_text whether to write the SavedModel proto in text format.

checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

Returns The path to the exported directory as a bytes object.

Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.

export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators.
This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.

The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs. A usage sketch appears at the end of this entry.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.

serving_input_receiver_fn A function that takes no arguments and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.

assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.

as_text whether to write the SavedModel proto in text format.

checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.

Returns The path to the exported directory as a bytes object.

Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.

export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead.

For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.
The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.

serving_input_receiver_fn A function that takes no arguments and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.

assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.

as_text whether to write the SavedModel proto in text format.

checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.

Returns The path to the exported directory as a bytes object.

Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.

get_variable_names View source get_variable_names() Returns list of all variable names in this model.

Returns List of names.

Raises ValueError If the Estimator has not produced a checkpoint yet.

get_variable_value View source get_variable_value( name ) Returns value of the variable given by name.

Args name string or a list of strings, name of the tensor.

Returns Numpy array - value of the tensor.

Raises ValueError If the Estimator has not produced a checkpoint yet.

latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir.

Returns The full path to the latest checkpoint or None if no checkpoint was found.

predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506

Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of the Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.

predict_keys list of str, names of the keys to predict.
It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used, the rest of the predictions will be filtered from the dictionary. If None, returns all.

hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.

checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.

yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.

Yields Evaluated values of predictions tensors.

Raises ValueError If the batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict.

train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn.

Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.

hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.

steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.

max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iteration since the first call did all 100 steps.

saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.

Returns self, for chaining.

Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
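As referenced in the export_saved_model section above, here is a minimal export sketch, not from the original docs. It assumes estimator and feature_columns from the example at the top of this entry; build_parsing_serving_input_receiver_fn produces a receiver that parses serialized tf.Example protos at serving time.

import tensorflow as tf

# Derive a parsing spec from the model's feature columns, then build a
# receiver fn that expects serialized tf.Example protos.
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))

export_path = estimator.export_saved_model(
    '/tmp/linear_export', serving_input_receiver_fn)
print(export_path)  # bytes path of the timestamped export directory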
tf.compat.v1.estimator.LinearRegressor An estimator for TensorFlow linear regression problems. Inherits From: Estimator

tf.compat.v1.estimator.LinearRegressor(
    feature_columns, model_dir=None, label_dimension=1, weight_column=None,
    optimizer='Ftrl', config=None, partitioner=None, warm_start_from=None,
    loss_reduction=tf.compat.v1.losses.Reduction.SUM, sparse_combiner='sum'
)

Train a linear regression model to predict label value given observation of feature values.

Example:

categorical_column_a = categorical_column_with_hash_bucket(...)
categorical_column_b = categorical_column_with_hash_bucket(...)
categorical_feature_a_x_categorical_feature_b = crossed_column(...)

# Estimator using the default optimizer.
estimator = tf.estimator.LinearRegressor(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b])

# Or estimator using the FTRL optimizer with regularization.
estimator = tf.estimator.LinearRegressor(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=tf.keras.optimizers.Ftrl(
        learning_rate=0.1,
        l1_regularization_strength=0.001
    ))

# Or estimator using an optimizer with a learning rate decay.
estimator = tf.estimator.LinearRegressor(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    optimizer=lambda: tf.keras.optimizers.Ftrl(
        learning_rate=tf.compat.v1.train.exponential_decay(
            learning_rate=0.1,
            global_step=tf.compat.v1.train.get_global_step(),
            decay_steps=10000,
            decay_rate=0.96)))

# Or estimator with warm-starting from a previous checkpoint.
estimator = tf.estimator.LinearRegressor(
    feature_columns=[categorical_column_a,
                     categorical_feature_a_x_categorical_feature_b],
    warm_start_from="/path/to/checkpoint/dir")

# Input builders
def input_fn_train():
  # Returns tf.data.Dataset of (x, y) tuple where y represents the label's
  # value.
  pass

def input_fn_eval():
  # Returns tf.data.Dataset of (x, y) tuple where y represents the label's
  # value.
  pass

def input_fn_predict():
  # Returns tf.data.Dataset of (x, None) tuple.
  pass

estimator.train(input_fn=input_fn_train)
metrics = estimator.evaluate(input_fn=input_fn_eval)
predictions = estimator.predict(input_fn=input_fn_predict)

Input of train and evaluate should have the following features, otherwise there will be a KeyError: if weight_column is not None, a feature with key=weight_column whose value is a Tensor; for each column in feature_columns: if column is a SparseColumn, a feature with key=column.name whose value is a SparseTensor; if column is a WeightedSparseColumn, two features: the first with key the id column name, the second with key the weight column name (both features' value must be a SparseTensor); if column is a RealValuedColumn, a feature with key=column.name whose value is a Tensor.

Loss is calculated by using mean squared error.

Args model_fn Model function. Follows the signature: features -- This is the first item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same. labels -- This is the second item returned from the input_fn passed to train, evaluate, and predict. This should be a single tf.Tensor or dict of same (for multi-head models). If mode is tf.estimator.ModeKeys.PREDICT, labels=None will be passed. If the model_fn's signature does not accept mode, the model_fn must still be able to handle labels=None. mode -- Optional. Specifies if this is training, evaluation or prediction. See tf.estimator.ModeKeys. params -- Optional dict of hyperparameters.
Will receive what is passed to Estimator in the params parameter. This allows configuring Estimators from hyperparameter tuning. config -- Optional estimator.RunConfig object. Will receive what is passed to Estimator as its config parameter, or a default value. Allows setting up things in your model_fn based on configuration such as num_ps_replicas, or model_dir. Returns -- tf.estimator.EstimatorSpec

model_dir Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If a PathLike object, the path will be resolved. If None, the model_dir in config will be used if set. If both are set, they must be the same. If both are None, a temporary directory will be used.

config estimator.RunConfig configuration object.

params dict of hyperparameters that will be passed into model_fn. Keys are names of parameters, values are basic Python types.

warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting. If None, only TRAINABLE variables are warm-started. If the string filepath is provided instead of a tf.estimator.WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and tf.Tensor names are unchanged.

Raises ValueError parameters of model_fn don't match params. ValueError if this is called via a subclass and if that class overrides a member of Estimator.

Eager Compatibility Estimators can be used while eager execution is enabled. Note that input_fn and all hooks are executed inside a graph context, so they have to be written to be compatible with graph mode. Note that input_fn code using tf.data generally works in both graph and eager modes.

Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params

Methods

eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped.

Args name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.

Returns A string which is the path of the directory containing evaluation metrics.

evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration).

Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.

steps Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception.

hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call.
checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint.

name Name of the evaluation if the user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard.

Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean.

Raises ValueError If steps <= 0.

experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory.

For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures.

For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.
input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver.

assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.

as_text whether to write the SavedModel proto in text format.

checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

Returns The path to the exported directory as a bytes object.

Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found.

export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.

The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.

serving_input_receiver_fn A function that takes no arguments and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.

assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.

as_text whether to write the SavedModel proto in text format.

checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental.

Returns The path to the exported directory as a bytes object.
Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.

export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. A short migration sketch appears after the get_variable_value method below.

For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.

The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn.

Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}.

Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels.

serving_input_receiver_fn A function that takes no arguments and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver.

assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed.

as_text whether to write the SavedModel proto in text format.

checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen.

strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes.

Returns The path to the exported directory as a bytes object.

Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found.

get_variable_names View source get_variable_names() Returns list of all variable names in this model.

Returns List of names.

Raises ValueError If the Estimator has not produced a checkpoint yet.

get_variable_value View source get_variable_value( name ) Returns value of the variable given by name.

Args name string or a list of strings, name of the tensor.

Returns Numpy array - value of the tensor.

Raises ValueError If the Estimator has not produced a checkpoint yet.
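As referenced in the deprecation notice above, migrating from export_savedmodel is a one-line rename; a minimal sketch, assuming estimator and serving_input_receiver_fn are already defined:

# Deprecated spelling:
# estimator.export_savedmodel('/tmp/export', serving_input_receiver_fn)

# Renamed replacement; per the signatures documented above, export_saved_model
# drops strip_default_attrs and adds an experimental_mode argument instead.
estimator.export_saved_model('/tmp/export', serving_input_receiver_fn)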
latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir.

Returns The full path to the latest checkpoint or None if no checkpoint was found.

predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features (see the sketch at the end of this entry). Please note that interleaving two predict outputs does not work. See: issue/20506

Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of the Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features.

predict_keys list of str, names of the keys to predict. It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used, the rest of the predictions will be filtered from the dictionary. If None, returns all.

hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call.

checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint.

yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size.

Yields Evaluated values of predictions tensors.

Raises ValueError If the batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict.

train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn.

Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of the Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.

hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop.

steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. steps works incrementally: if you call train(steps=10) twice, training occurs for 20 steps in total. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you don't want incremental behavior, set max_steps instead. If set, max_steps must be None.
max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRange error or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iteration since the first call did all 100 steps.

saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint savings.

Returns self, for chaining.

Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
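As referenced in the predict section above, a hedged sketch of consuming the prediction generator, assuming estimator and input_fn_predict from the example at the top of this entry; 'predictions' is the key canned regressors typically emit:

import itertools

# predict returns a generator; take just the first five predictions. With
# yield_single_examples=True (the default), each element is a dict of numpy
# values for one example.
for pred in itertools.islice(estimator.predict(input_fn=input_fn_predict), 5):
  print(pred['predictions'])  # shape [label_dimension]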
tf.compat.v1.estimator.regressor_parse_example_spec Generates parsing spec for tf.parse_example to be used with regressors. tf.compat.v1.estimator.regressor_parse_example_spec( feature_columns, label_key, label_dtype=tf.dtypes.float32, label_default=None, label_dimension=1, weight_column=None ) If users keep data in tf.Example format, they need to call tf.parse_example with a proper feature spec. This utility helps in two main ways: Users need to combine the parsing spec of features with the specs of labels and weights (if any), since they are all parsed from the same tf.Example instance. This utility combines these specs. It is difficult to map the label expected by a regressor such as DNNRegressor to a corresponding tf.parse_example spec. This utility encodes it by getting related information from users (key, dtype). Example output of parsing spec: # Define features and transformations feature_b = tf.feature_column.numeric_column(...) feature_c_bucketized = tf.feature_column.bucketized_column( tf.feature_column.numeric_column("feature_c"), ...) feature_a_x_feature_c = tf.feature_column.crossed_column( columns=["feature_a", feature_c_bucketized], ...) feature_columns = [feature_b, feature_c_bucketized, feature_a_x_feature_c] parsing_spec = tf.estimator.regressor_parse_example_spec( feature_columns, label_key='my-label') # For the above example, regressor_parse_example_spec would return the dict: assert parsing_spec == { "feature_a": parsing_ops.VarLenFeature(tf.string), "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32), "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32), "my-label": parsing_ops.FixedLenFeature([1], dtype=tf.float32) } Example usage with a regressor: feature_columns = # define features via tf.feature_column estimator = DNNRegressor( hidden_units=[256, 64, 16], feature_columns=feature_columns, weight_column='example-weight', label_dimension=3) # This label configuration tells the regressor the following: # * weights are retrieved with key 'example-weight' # * label is a 3-dimensional tensor with float32 dtype. # Input builders def input_fn_train(): # Returns a tuple of features and labels. features = tf.contrib.learn.read_keyed_batch_features( file_pattern=train_files, batch_size=batch_size, # creates parsing configuration for tf.parse_example features=tf.estimator.regressor_parse_example_spec( feature_columns, label_key='my-label', label_dimension=3, weight_column='example-weight'), reader=tf.RecordIOReader) labels = features.pop('my-label') return features, labels estimator.train(input_fn=input_fn_train) Args feature_columns An iterable containing all feature columns. All items should be instances of classes derived from _FeatureColumn. label_key A string identifying the label. It means tf.Example stores labels with this key. label_dtype A tf.dtype identifying the type of labels. By default it is tf.float32. label_default Used as the label if label_key does not exist in the given tf.Example. By default it is None, which means tf.parse_example will error out if there is any missing label. label_dimension Number of regression targets per example. This is the size of the last dimension of the labels and logits Tensor objects (typically, these have shape [batch_size, label_dimension]). weight_column A string or a NumericColumn created by tf.feature_column.numeric_column defining the feature column representing weights. It is used to down weight or boost examples during training. It will be multiplied by the loss of the example.
If it is a string, it is used as a key to fetch the weight tensor from the features. If it is a NumericColumn, the raw tensor is fetched by key weight_column.key, then weight_column.normalizer_fn is applied to it to get the weight tensor. Returns A dict mapping each feature key to a FixedLenFeature or VarLenFeature value. Raises ValueError If label is used in feature_columns. ValueError If weight_column is used in feature_columns. ValueError If any of the given feature_columns is not a _FeatureColumn instance. ValueError If weight_column is not a NumericColumn instance. ValueError If label_key is None.
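As a concrete end-to-end sketch using tf.data instead of the deprecated tf.contrib reader above (the column names, the train_files list, and the batch size are hypothetical):

import tensorflow.compat.v1 as tf

feature_columns = [
    tf.feature_column.numeric_column('age'),
    tf.feature_column.numeric_column('height'),
]

parsing_spec = tf.estimator.regressor_parse_example_spec(
    feature_columns, label_key='price', weight_column='example-weight')

def input_fn_train():
  # train_files: assumed list of TFRecord file paths.
  dataset = tf.data.TFRecordDataset(train_files)
  dataset = dataset.batch(32)

  def parse(serialized):
    # The weight stays in `features` under 'example-weight'; a head
    # configured with weight_column='example-weight' will pick it up.
    features = tf.io.parse_example(serialized, parsing_spec)
    labels = features.pop('price')
    return features, labels

  return dataset.map(parse)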
Module: tf.compat.v1.estimator.tpu Public API for tf.estimator.tpu namespace. Modules experimental module: Public API for tf.estimator.tpu.experimental namespace. Classes class InputPipelineConfig: Please see the definition of these values in TPUConfig. class RunConfig: RunConfig with TPU support. class TPUConfig: TPU related configuration required by TPUEstimator. class TPUEstimator: Estimator with TPU support. class TPUEstimatorSpec: Ops and objects returned from a model_fn and passed to TPUEstimator.
Module: tf.compat.v1.estimator.tpu.experimental Public API for tf.estimator.tpu.experimental namespace. Classes class EmbeddingConfigSpec: Class to keep track of the specification for TPU embeddings.
tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec Class to keep track of the specification for TPU embeddings. tf.compat.v1.estimator.tpu.experimental.EmbeddingConfigSpec( feature_columns=None, optimization_parameters=None, clipping_limit=None, pipeline_execution_with_tensor_core=False, experimental_gradient_multiplier_fn=None, feature_to_config_dict=None, table_to_config_dict=None, partition_strategy='div', profile_data_directory=None ) Pass this class to tf.estimator.tpu.TPUEstimator via the embedding_config_spec parameter. At minimum you need to specify feature_columns and optimization_parameters. The feature columns passed should be created with some combination of tf.tpu.experimental.embedding_column and tf.tpu.experimental.shared_embedding_columns. TPU embeddings do not support arbitrary Tensorflow optimizers, and the main optimizer you use for your model will be ignored for the embedding table variables. Instead, TPU embeddings support a fixed set of predefined optimizers that you can select from and set the parameters of. These include adagrad, adam and stochastic gradient descent. Each supported optimizer has a Parameters class in the tf.tpu.experimental namespace. column_a = tf.feature_column.categorical_column_with_identity(...) column_b = tf.feature_column.categorical_column_with_identity(...) column_c = tf.feature_column.categorical_column_with_identity(...) tpu_shared_columns = tf.tpu.experimental.shared_embedding_columns( [column_a, column_b], 10) tpu_non_shared_column = tf.tpu.experimental.embedding_column( column_c, 10) tpu_columns = [tpu_non_shared_column] + tpu_shared_columns ... def model_fn(features): dense_features = tf.keras.layers.DenseFeatures(tpu_columns) embedded_feature = dense_features(features) ... estimator = tf.estimator.tpu.TPUEstimator( model_fn=model_fn, ... embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec( feature_columns=tpu_columns, optimization_parameters=( tf.estimator.tpu.experimental.AdagradParameters(0.1)))) Args feature_columns All embedding FeatureColumns used by the model. optimization_parameters An instance of AdagradParameters, AdamParameters or StochasticGradientDescentParameters. This optimizer will be applied to all embedding variables specified by feature_columns. clipping_limit (Optional) Clipping limit (absolute value). pipeline_execution_with_tensor_core Setting this to True makes training faster, but the trained model will be different if step N and step N+1 involve the same set of embedding IDs. Please see tpu_embedding_configuration.proto for details. experimental_gradient_multiplier_fn (Optional) A function taking the global step as input and returning the current multiplier for all embedding gradients. feature_to_config_dict A dictionary mapping feature names to instances of the class FeatureConfig. Either feature_columns or the pair of feature_to_config_dict and table_to_config_dict must be specified. table_to_config_dict A dictionary mapping feature names to instances of the class TableConfig. Either feature_columns or the pair of feature_to_config_dict and table_to_config_dict must be specified. partition_strategy A string, determining how tensors are sharded to the TPU hosts. See tf.nn.safe_embedding_lookup_sparse for more details. Allowed values are "div" and "mod". If "mod" is used, evaluation and exporting the model to CPU will not work as expected. profile_data_directory Directory where embedding lookup statistics are stored.
These statistics summarize information about the inputs to the embedding lookup operation, in particular, the average number of embedding IDs per example and how well the embedding IDs are load balanced across the system. The lookup statistics are used during TPU initialization for embedding table partitioning. Collection of lookup statistics is done at runtime by profiling the embedding inputs: only 3% of input samples are profiled to minimize host CPU overhead. Once a suitable number of samples are profiled, the lookup statistics are saved to table-specific files in the profile data directory, generally at the end of a TPU training loop. The filename corresponding to each table is obtained by hashing table-specific parameters (e.g., table name and number of features) and global configuration parameters (e.g., sharding strategy and task count). The same profile data directory can be shared among several models to reuse embedding lookup statistics. Raises ValueError If the feature_columns are not specified. TypeError If the feature columns are not of the correct type (one of _SUPPORTED_FEATURE_COLUMNS, _TPU_EMBEDDING_COLUMN_CLASSES or _EMBEDDING_COLUMN_CLASSES). ValueError If optimization_parameters is not one of the required types. Attributes feature_columns tensor_core_feature_columns optimization_parameters clipping_limit pipeline_execution_with_tensor_core experimental_gradient_multiplier_fn feature_to_config_dict table_to_config_dict partition_strategy profile_data_directory
tf.compat.v1.estimator.tpu.InputPipelineConfig Please see the definition of these values in TPUConfig. Class Variables
PER_SHARD_V1 1
PER_HOST_V1 2
PER_HOST_V2 3
BROADCAST 4
SLICED 5
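These values are consumed by TPUConfig fields such as per_host_input_for_training and eval_training_input_configuration. A minimal usage sketch:

import tensorflow.compat.v1 as tf

# Select the per-host V2 input pipeline: input_fn is invoked once per host
# and each call should produce a per-host batch.
tpu_config = tf.estimator.tpu.TPUConfig(
    iterations_per_loop=100,
    per_host_input_for_training=(
        tf.estimator.tpu.InputPipelineConfig.PER_HOST_V2))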
tf.compat.v1.estimator.tpu.RunConfig RunConfig with TPU support. Inherits From: RunConfig tf.compat.v1.estimator.tpu.RunConfig( tpu_config=None, evaluation_master=None, master=None, cluster=None, **kwargs ) Args tpu_config the TPUConfig that specifies TPU-specific configuration. evaluation_master a string. The address of the master to use for eval. Defaults to master if not set. master a string. The address of the master to use for training. cluster a ClusterResolver. **kwargs keyword config parameters. Raises ValueError if cluster is not None and the provided session_config has a cluster_def already. Attributes checkpoint_save_graph_def cluster cluster_spec device_fn Returns the device_fn. If device_fn is not None, it overrides the default device function used in Estimator. Otherwise the default one is used. eval_distribute Optional tf.distribute.Strategy for evaluation. evaluation_master experimental_max_worker_delay_secs global_id_in_cluster The global id in the training cluster. All global ids in the training cluster are assigned from an increasing sequence of consecutive integers. The first id is 0. Note: Task id (the property field task_id) is tracking the index of the node among all nodes with the SAME task type. For example, given the cluster definition as follows:

cluster = {'chief': ['host0:2222'],
           'ps': ['host1:2222', 'host2:2222'],
           'worker': ['host3:2222', 'host4:2222', 'host5:2222']}

Nodes with task type worker can have id 0, 1, 2. Nodes with task type ps can have id 0, 1. So, task_id is not unique, but the pair (task_type, task_id) can uniquely determine a node in the cluster. Global id, i.e., this field, is tracking the index of the node among ALL nodes in the cluster. It is uniquely assigned. For example, for the cluster spec given above, the global ids are assigned as:

task_type | task_id | global_id
--------------------------------
chief     | 0       | 0
worker    | 0       | 1
worker    | 1       | 2
worker    | 2       | 3
ps        | 0       | 4
ps        | 1       | 5

is_chief keep_checkpoint_every_n_hours keep_checkpoint_max log_step_count_steps master model_dir num_ps_replicas num_worker_replicas protocol Returns the optional protocol value. save_checkpoints_secs save_checkpoints_steps save_summary_steps service Returns the platform defined (in TF_CONFIG) service dict. session_config session_creation_timeout_secs task_id task_type tf_random_seed tpu_config train_distribute Optional tf.distribute.Strategy for training. Methods replace View source replace( **kwargs ) Returns a new instance of RunConfig replacing specified properties. Only the properties in the following list are allowed to be replaced: model_dir, tf_random_seed, save_summary_steps, save_checkpoints_steps, save_checkpoints_secs, session_config, keep_checkpoint_max, keep_checkpoint_every_n_hours, log_step_count_steps, train_distribute, device_fn, protocol, eval_distribute, experimental_distribute, experimental_max_worker_delay_secs. In addition, either save_checkpoints_steps or save_checkpoints_secs can be set (they should not both be set). Args **kwargs keyword named properties with new values. Raises ValueError If any property name in kwargs does not exist or is not allowed to be replaced, or both save_checkpoints_steps and save_checkpoints_secs are set. Returns a new instance of RunConfig.
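A minimal construction sketch; the master address and model_dir are hypothetical:

import tensorflow.compat.v1 as tf

run_config = tf.estimator.tpu.RunConfig(
    master='grpc://10.0.0.1:8470',   # hypothetical TPU worker address
    model_dir='/tmp/tpu_model',
    save_checkpoints_steps=1000,
    tpu_config=tf.estimator.tpu.TPUConfig(iterations_per_loop=100))

# replace() returns a modified copy; only the whitelisted properties
# listed above may be changed.
seeded_config = run_config.replace(tf_random_seed=42)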
tf.compat.v1.estimator.tpu.TPUConfig TPU related configuration required by TPUEstimator. tf.compat.v1.estimator.tpu.TPUConfig( iterations_per_loop=2, num_shards=None, num_cores_per_replica=None, per_host_input_for_training=True, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None, eval_training_input_configuration=InputPipelineConfig.PER_HOST_V1, experimental_host_call_every_n_steps=1, experimental_allow_per_host_v2_parallel_get_next=False, experimental_feed_hook=None ) Args iterations_per_loop This is the number of train steps running in the TPU system before returning to the CPU host for each Session.run. This means the global step is increased iterations_per_loop times in one Session.run. It is recommended to be set as the number of global steps until the next checkpoint. Note that evaluation does not use this value; instead, the total eval steps run on the TPU in a single Session.run. [Experimental]: iterations_per_loop can be specified as a time interval. To specify N seconds in one Session.run, specify it as Ns, substituting N with the number of desired seconds. Alternatively, the unit of time can also be specified in minutes or hours, e.g. 3600s or 60m or 1h. num_shards (Deprecated, ignored by TPUEstimator). The number of model replicas in the system. For the non-model-parallelism case, this number equals the total number of TPU cores. For model-parallelism, the total number of TPU cores equals num_cores_per_replica * num_shards. num_cores_per_replica Defaults to None, which disables model parallelism. An integer which describes the number of TPU cores per model replica. This is required by model-parallelism, which enables partitioning the model across multiple cores. Currently num_cores_per_replica must be 1, 2, 4, or 8. per_host_input_for_training If True, for PER_HOST_V1, the input_fn is invoked once on each host, and the number of hosts must be smaller than or equal to the number of replicas. For PER_HOST_V2, the input_fn is invoked once for each host (if the number of hosts is less than the number of replicas) or replica (if the number of replicas is less than the number of hosts). With the per-core input pipeline configuration, it is invoked once for each core. With a global batch size train_batch_size in the TPUEstimator constructor, the batch size for each shard is train_batch_size // #hosts in the True or PER_HOST_V1 mode. In PER_HOST_V2 mode, it is train_batch_size // #cores. In BROADCAST mode, input_fn is only invoked once on host 0 and the tensors are broadcast to all other replicas. The batch size equals train_batch_size. With the per-core input pipeline configuration, the shard batch size is also train_batch_size // #cores. Note: per_host_input_for_training==PER_SHARD_V1 only supports mode.TRAIN. tpu_job_name The name of the TPU job. Typically, this name is auto-inferred within TPUEstimator, however when using ClusterSpec propagation in more esoteric cluster configurations, you may need to specify the job name as a string. initial_infeed_sleep_secs The number of seconds the infeed thread should wait before enqueueing the first batch. This helps avoid timeouts for models that require a long compilation time. input_partition_dims A nested list to describe the partition dims for all the tensors from input_fn(). The structure of input_partition_dims must match the structure of features and labels from input_fn(). The total number of partitions must match num_cores_per_replica.
For example, if input_fn() returns two tensors: images with shape [N, H, W, C] and labels with shape [N], then input_partition_dims = [[1, 2, 2, 1], None] will split the images into 4 pieces and feed them into 4 TPU cores. The labels tensor is broadcast directly to all the TPU cores since its partition dims entry is None. Current limitations: This feature is only supported with the PER_HOST_V2 input mode. eval_training_input_configuration If SLICED, input_fn is only invoked once on host 0 and the tensors are broadcast to all other replicas. Unlike per_host_input_for_training=BROADCAST, each replica will only get a slice of the data instead of a whole copy. If PER_HOST_V1, the behavior is determined by per_host_input_for_training. experimental_host_call_every_n_steps Within a training loop, this argument sets how often host calls are performed during training. Host calls will be evaluated every n steps within a training loop, where n is the value of this argument. experimental_allow_per_host_v2_parallel_get_next When enabled, allows concurrent execution of dataset get-next calls when using PER_HOST_V2 input. May result in a performance increase for models with a small step time, but as a consequence TPUEstimator may non-deterministically distribute batches to different cores, rather than guaranteeing round-robin behavior. experimental_feed_hook This is a class which the user can provide to the TPU estimator to override the default TPUInfeedOutfeedSessionHook implementation and add a customized implementation to handle infeed/outfeed logic. If the given class is None, the TPU estimator uses the default TPUInfeedOutfeedSessionHook implementation in tpu_estimator.py. If not None, the TPU estimator uses this customized TPU infeed/outfeed session hook class to override the default one. Raises ValueError If num_cores_per_replica is not 1, 2, 4, 8, ..., 128. Attributes iterations_per_loop num_shards num_cores_per_replica per_host_input_for_training tpu_job_name initial_infeed_sleep_secs input_partition_dims eval_training_input_configuration experimental_host_call_every_n_steps experimental_allow_per_host_v2_parallel_get_next experimental_feed_hook
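Putting the input-partitioning example above into code, a minimal sketch of a model-parallel TPUConfig:

import tensorflow.compat.v1 as tf

# Split each [N, H, W, C] image batch into a 2x2 spatial grid across the
# 4 cores of each replica; broadcast the [N] labels (partition dims None).
tpu_config = tf.estimator.tpu.TPUConfig(
    iterations_per_loop=200,
    num_cores_per_replica=4,
    per_host_input_for_training=(
        tf.estimator.tpu.InputPipelineConfig.PER_HOST_V2),
    input_partition_dims=[[1, 2, 2, 1], None])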
tf.compat.v1.estimator.tpu.TPUEstimator Estimator with TPU support. Inherits From: Estimator tf.compat.v1.estimator.tpu.TPUEstimator( model_fn=None, model_dir=None, config=None, params=None, use_tpu=True, train_batch_size=None, eval_batch_size=None, predict_batch_size=None, batch_axis=None, eval_on_tpu=True, export_to_tpu=True, export_to_cpu=True, warm_start_from=None, embedding_config_spec=None, export_saved_model_api_version=ExportSavedModelApiVersion.V1 ) TPUEstimator also supports training on CPU and GPU. You don't need to define a separate tf.estimator.Estimator. TPUEstimator handles many of the details of running on TPU devices, such as replicating inputs and models for each core, and returning to host periodically to run hooks. TPUEstimator transforms a global batch size in params to a per-shard batch size when calling the input_fn and model_fn. Users should specify the global batch size in the constructor, and then get the batch size for each shard in input_fn and model_fn by params['batch_size']. For training, model_fn gets the per-core batch size; input_fn may get the per-core or per-host batch size depending on per_host_input_for_training in TPUConfig (see the docstring for TPUConfig for details). For evaluation and prediction, model_fn gets the per-core batch size and input_fn gets the per-host batch size. For evaluation, model_fn should return TPUEstimatorSpec, which expects the eval_metrics for TPU evaluation. If eval_on_tpu is False, the evaluation will execute on CPU or GPU; in this case the following discussion on TPU evaluation does not apply. TPUEstimatorSpec.eval_metrics is a tuple of metric_fn and tensors, where tensors could be a list of any nested structure of Tensors (see TPUEstimatorSpec for details). metric_fn takes the tensors and returns a dict from metric string name to the result of calling a metric function, namely a (metric_tensor, update_op) tuple. One can set use_tpu to False for testing. All training, evaluation, and prediction will be executed on CPU. input_fn and model_fn will receive train_batch_size or eval_batch_size unmodified as params['batch_size']. Current limitations: TPU evaluation only works on a single host (one TPU worker) except in BROADCAST mode. input_fn for evaluation should NOT raise an end-of-input exception (OutOfRangeError or StopIteration). And all evaluation steps and all batches should have the same size. Example (MNIST): # The metric fn which runs on CPU. def metric_fn(labels, logits): predictions = tf.argmax(logits, 1) return { 'accuracy': tf.compat.v1.metrics.precision( labels=labels, predictions=predictions), } # Your model fn which runs on TPU (eval_metrics is a list in this example) def model_fn(features, labels, mode, config, params): ... logits = ... if mode == tf.estimator.ModeKeys.EVAL: return tpu_estimator.TPUEstimatorSpec( mode=mode, loss=loss, eval_metrics=(metric_fn, [labels, logits])) # or specify the eval_metrics tensors as a dict. def model_fn(features, labels, mode, config, params): ... final_layer_output = ... if mode == tf.estimator.ModeKeys.EVAL: return tpu_estimator.TPUEstimatorSpec( mode=mode, loss=loss, eval_metrics=(metric_fn, { 'labels': labels, 'logits': final_layer_output, })) Prediction Prediction on TPU is an experimental feature to support large batch inference. It is not designed for latency-critical systems. In addition, due to some usability issues, for prediction with a small dataset, CPU .predict, i.e., creating a new TPUEstimator instance with use_tpu=False, might be more convenient.
Note: In contrast to TPU training/evaluation, the input_fn for prediction should raise an end-of-input exception (OutOfRangeError or StopIteration), which serves as the stopping signal to TPUEstimator. To be precise, the ops created by input_fn produce one batch of the data. The predict() API processes one batch at a time. When reaching the end of the data source, an end-of-input exception should be raised by one of these operations. The user usually does not need to do this manually. As long as the dataset is not repeated forever, the tf.data API will raise an end-of-input exception automatically after the last batch has been produced. Note: Estimator.predict returns a Python generator. Please consume all the data from the generator so that TPUEstimator can shut down the TPU system properly for the user. Current limitations: TPU prediction only works on a single host (one TPU worker). input_fn must return a Dataset instance rather than features. In fact, .train() and .evaluate() also support Dataset as a return value. Example (MNIST): height = 32 width = 32 total_examples = 100 def predict_input_fn(params): batch_size = params['batch_size'] images = tf.random.uniform( [total_examples, height, width, 3], minval=-1, maxval=1) dataset = tf.data.Dataset.from_tensor_slices(images) dataset = dataset.map(lambda images: {'image': images}) dataset = dataset.batch(batch_size) return dataset def model_fn(features, labels, params, mode): # Generate predictions, called 'output', from features['image'] if mode == tf.estimator.ModeKeys.PREDICT: return tf.contrib.tpu.TPUEstimatorSpec( mode=mode, predictions={ 'predictions': output, 'is_padding': features['is_padding'] }) tpu_est = TPUEstimator( model_fn=model_fn, ..., predict_batch_size=16) # Fully consume the generator so that TPUEstimator can shut down the TPU # system. for item in tpu_est.predict(input_fn=predict_input_fn): # Filter out item if the `is_padding` is 1. # Process the 'predictions' Exporting export_saved_model exports 2 metagraphs, one with saved_model.SERVING, and another with saved_model.SERVING and saved_model.TPU tags. At serving time, these tags are used to select the appropriate metagraph to load. Before running the graph on TPU, the TPU system needs to be initialized. If TensorFlow Serving model-server is used, this is done automatically. If not, please use session.run(tpu.initialize_system()). There are two versions of the API: 1 or 2. In V1, the exported CPU graph is model_fn as-is. The exported TPU graph wraps tpu.rewrite() and TPUPartitionedCallOp around model_fn, so model_fn is on TPU by default. To place ops on CPU, tpu.outside_compilation(host_call, logits) can be used. Example: def model_fn(features, labels, mode, config, params): ... logits = ... export_outputs = { 'logits': export_output_lib.PredictOutput( {'logits': logits}) } def host_call(logits): class_ids = math_ops.argmax(logits) classes = string_ops.as_string(class_ids) export_outputs['classes'] = export_output_lib.ClassificationOutput(classes=classes) tpu.outside_compilation(host_call, logits) ... In V2, export_saved_model() sets the params['use_tpu'] flag to let the user know if the code is exporting to TPU (or not). When params['use_tpu'] is True, users need to call tpu.rewrite(), TPUPartitionedCallOp and/or batch_function(). Alternatively use inference_on_tpu(), which is a convenience wrapper of the three. def model_fn(features, labels, mode, config, params): ... # This could be some pre-processing on CPU like calls to input layer with # embedding columns.
x2 = features['x'] * 2 def computation(input_tensor): return layers.dense( input_tensor, 1, kernel_initializer=init_ops.zeros_initializer()) inputs = [x2] if params['use_tpu']: predictions = array_ops.identity( tpu_estimator.inference_on_tpu(computation, inputs, num_batch_threads=1, max_batch_size=2, batch_timeout_micros=100), name='predictions') else: predictions = array_ops.identity( computation(*inputs), name='predictions') key = signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY export_outputs = { key: export_lib.PredictOutput({'prediction': predictions}) } ... TIP: V2 is recommended as it is more flexible (e.g. batching). Args model_fn Model function as required by Estimator which returns EstimatorSpec or TPUEstimatorSpec. training_hooks, evaluation_hooks, and prediction_hooks must not capture any TPU Tensor inside the model_fn. model_dir Directory to save model parameters, graph and etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model. If None, the model_dir in config will be used if set. If both are set, they must be same. If both are None, a temporary directory will be used. config A tpu_config.RunConfig configuration object. Cannot be None. params An optional dict of hyper parameters that will be passed into input_fn and model_fn. Keys are names of parameters, values are basic python types. There are reserved keys for TPUEstimator, including 'batch_size'. use_tpu A bool indicating whether TPU support is enabled. Currently, TPU training and evaluation respect this bit, but eval_on_tpu can override execution of eval. See below. train_batch_size An int representing the global training batch size. TPUEstimator transforms this global batch size to a per-shard batch size, as params['batch_size'], when calling input_fn and model_fn. Cannot be None if use_tpu is True. Must be divisible by the total number of replicas. eval_batch_size An int representing the evaluation batch size. Must be divisible by the total number of replicas. predict_batch_size An int representing the prediction batch size. Must be divisible by the total number of replicas. batch_axis A python tuple of int values describing how each tensor produced by the Estimator input_fn should be split across the TPU compute shards. For example, if your input_fn produced (images, labels) where the images tensor is in HWCN format, your shard dimensions would be [3, 0], where 3 corresponds to the N dimension of your images Tensor, and 0 corresponds to the dimension along which to split the labels to match up with the corresponding images. If None is supplied, and per_host_input_for_training is True, batches will be sharded based on the major dimension. If tpu_config.per_host_input_for_training is False or PER_HOST_V2, batch_axis is ignored. eval_on_tpu If False, evaluation runs on CPU or GPU. In this case, the model_fn must return EstimatorSpec when called with mode as EVAL. export_to_tpu If True, export_saved_model() exports a metagraph for serving on TPU. Note that unsupported export modes such as EVAL will be ignored. For those modes, only a CPU model will be exported. Currently, export_to_tpu only supports PREDICT. export_to_cpu If True, export_saved_model() exports a metagraph for serving on CPU. warm_start_from Optional string filepath to a checkpoint or SavedModel to warm-start from, or a tf.estimator.WarmStartSettings object to fully configure warm-starting.
If the string filepath is provided instead of a WarmStartSettings, then all variables are warm-started, and it is assumed that vocabularies and Tensor names are unchanged. embedding_config_spec Optional EmbeddingConfigSpec instance to support using TPU embedding. export_saved_model_api_version an integer: 1 or 2. 1 corresponds to V1, 2 corresponds to V2. (Defaults to V1.) With V1, export_saved_model() adds rewrite() and TPUPartitionedCallOp() for the user; with V2, the user is expected to add rewrite(), TPUPartitionedCallOp(), etc. in their model_fn. A helper function inference_on_tpu is provided for V2. brn_tpu_estimator.py includes examples for both versions, i.e. TPUEstimatorExportTest and TPUEstimatorExportV2Test. Raises ValueError If params has reserved keys already. Attributes config model_dir model_fn Returns the model_fn which is bound to self.params. params Methods eval_dir View source eval_dir( name=None ) Shows the directory name where evaluation metrics are dumped. Args name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A string which is the path of the directory containing evaluation metrics. evaluate View source evaluate( input_fn, steps=None, hooks=None, checkpoint_path=None, name=None ) Evaluates the model given evaluation data input_fn. For each step, calls input_fn, which returns one batch of data. Evaluates until: steps batches are processed, or input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). Args input_fn A function that constructs the input data for evaluation. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. steps Number of steps for which to evaluate model. If None, evaluates until input_fn raises an end-of-input exception. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the evaluation call. checkpoint_path Path of a specific checkpoint to evaluate. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, evaluation is run with newly initialized Variables instead of ones restored from checkpoint. name Name of the evaluation if user needs to run multiple evaluations on different data sets, such as on training data vs test data. Metrics for different evaluations are saved in separate folders, and appear separately in tensorboard. Returns A dict containing the evaluation metrics specified in model_fn keyed by name, as well as an entry global_step which contains the value of the global step for which this evaluation was performed. For canned estimators, the dict contains the loss (mean loss per mini-batch) and the average_loss (mean loss per sample). Canned classifiers also return the accuracy. Canned regressors also return the label/mean and the prediction/mean. Raises ValueError If steps <= 0.
experimental_export_all_saved_models View source experimental_export_all_saved_models( export_dir_base, input_receiver_fn_map, assets_extra=None, as_text=False, checkpoint_path=None ) Exports a SavedModel with tf.MetaGraphDefs for each requested mode. For each mode passed in via the input_receiver_fn_map, this method builds a new graph by calling the input_receiver_fn to obtain feature and label Tensors. Next, this method calls the Estimator's model_fn in the passed mode to generate the model graph based on those features and labels, and restores the given checkpoint (or, lacking that, the most recent checkpoint) into the graph. Only one of the modes is used for saving variables to the SavedModel (order of preference: tf.estimator.ModeKeys.TRAIN, tf.estimator.ModeKeys.EVAL, then tf.estimator.ModeKeys.PREDICT), such that up to three tf.MetaGraphDefs are saved with a single set of variables in a single SavedModel directory. For the variables and tf.MetaGraphDefs, this method creates a timestamped export directory below export_dir_base and writes a SavedModel into it containing the tf.MetaGraphDef for the given mode and its associated signatures. For prediction, the exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. For training and evaluation, the train_op is stored in an extra collection, and loss, metrics, and predictions are included in a SignatureDef for the mode in question. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. input_receiver_fn_map dict of tf.estimator.ModeKeys to input_receiver_fn mappings, where the input_receiver_fn is a function that takes no arguments and returns the appropriate subclass of InputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. Returns The path to the exported directory as a bytes object. Raises ValueError if any input_receiver_fn is None, no export_outputs are provided, or no checkpoint can be found. export_saved_model View source export_saved_model( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, experimental_mode=ModeKeys.PREDICT ) Exports inference graph as a SavedModel into the given dir. For a detailed guide, see SavedModel from Estimators.
This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session. The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. The experimental_mode parameter can be used to export a single train/eval/predict graph as a SavedModel. See experimental_export_all_saved_models for full docs. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no arguments and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. experimental_mode tf.estimator.ModeKeys value indicating which mode will be exported. Note that this feature is experimental. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. export_savedmodel View source export_savedmodel( export_dir_base, serving_input_receiver_fn, assets_extra=None, as_text=False, checkpoint_path=None, strip_default_attrs=False ) Exports inference graph as a SavedModel into the given dir. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: This function has been renamed, use export_saved_model instead. For a detailed guide, see SavedModel from Estimators. This method builds a new graph by first calling the serving_input_receiver_fn to obtain feature Tensors, and then calling this Estimator's model_fn to generate the model graph based on those features. It restores the given checkpoint (or, lacking that, the most recent checkpoint) into this graph in a fresh session. Finally it creates a timestamped export directory below the given export_dir_base, and writes a SavedModel into it containing a single tf.MetaGraphDef saved from this session.
The exported MetaGraphDef will provide one SignatureDef for each element of the export_outputs dict returned from the model_fn, named using the same keys. One of these keys is always tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY, indicating which signature will be served when a serving request does not specify one. For each signature, the outputs are provided by the corresponding tf.estimator.export.ExportOutputs, and the inputs are always the input receivers provided by the serving_input_receiver_fn. Extra assets may be written into the SavedModel via the assets_extra argument. This should be a dict, where each key gives a destination path (including the filename) relative to the assets.extra directory. The corresponding value gives the full path of the source file to be copied. For example, the simple case of copying a single file without renaming it is specified as {'my_asset_file.txt': '/path/to/my_asset_file.txt'}. Args export_dir_base A string containing a directory in which to create timestamped subdirectories containing exported SavedModels. serving_input_receiver_fn A function that takes no arguments and returns a tf.estimator.export.ServingInputReceiver or tf.estimator.export.TensorServingInputReceiver. assets_extra A dict specifying how to populate the assets.extra directory within the exported SavedModel, or None if no extra assets are needed. as_text whether to write the SavedModel proto in text format. checkpoint_path The checkpoint path to export. If None (the default), the most recent checkpoint found within the model directory is chosen. strip_default_attrs Boolean. If True, default-valued attributes will be removed from the NodeDefs. For a detailed guide, see Stripping Default-Valued Attributes. Returns The path to the exported directory as a bytes object. Raises ValueError if no serving_input_receiver_fn is provided, no export_outputs are provided, or no checkpoint can be found. get_variable_names View source get_variable_names() Returns list of all variable names in this model. Returns List of names. Raises ValueError If the Estimator has not produced a checkpoint yet. get_variable_value View source get_variable_value( name ) Returns value of the variable given by name. Args name string or a list of strings, name of the tensor. Returns Numpy array - value of the tensor. Raises ValueError If the Estimator has not produced a checkpoint yet. latest_checkpoint View source latest_checkpoint() Finds the filename of the latest saved checkpoint file in model_dir. Returns The full path to the latest checkpoint or None if no checkpoint was found. predict View source predict( input_fn, predict_keys=None, hooks=None, checkpoint_path=None, yield_single_examples=True ) Yields predictions for given features. Please note that interleaving two predict outputs does not work. See: issue/20506 Args input_fn A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following: tf.data.Dataset object -- Outputs of Dataset object must have same constraints as below. features -- A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. A tuple, in which case the first item is extracted as features. predict_keys list of str, names of the keys to predict.
It is used if the tf.estimator.EstimatorSpec.predictions is a dict. If predict_keys is used, then the rest of the predictions will be filtered from the dictionary. If None, returns all. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the prediction call. checkpoint_path Path of a specific checkpoint to predict. If None, the latest checkpoint in model_dir is used. If there are no checkpoints in model_dir, prediction is run with newly initialized Variables instead of ones restored from checkpoint. yield_single_examples If False, yields the whole batch as returned by the model_fn instead of decomposing the batch into individual elements. This is useful if model_fn returns some tensors whose first dimension is not equal to the batch size. Yields Evaluated values of predictions tensors. Raises ValueError If batch length of predictions is not the same and yield_single_examples is True. ValueError If there is a conflict between predict_keys and predictions. For example if predict_keys is not None but tf.estimator.EstimatorSpec.predictions is not a dict. train View source train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None ) Trains a model given training data input_fn. Args input_fn A function that provides input data for training as minibatches. See Premade Estimators for more information. The function should construct and return one of the following: A tf.data.Dataset object: Outputs of Dataset object must be a tuple (features, labels) with same constraints as below. A tuple (features, labels): Where features is a tf.Tensor or a dictionary of string feature name to Tensor and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn. They should satisfy the expectation of model_fn from inputs. hooks List of tf.train.SessionRunHook subclass instances. Used for callbacks inside the training loop. steps Number of steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRangeError or StopIteration exception. steps works incrementally. If you call train(steps=10) twice, training occurs for a total of 20 steps. If OutOfRange or StopIteration occurs in the middle, training stops before 20 steps. If you do not want incremental behavior, set max_steps instead. If set, max_steps must be None. max_steps Number of total steps for which to train the model. If None, train forever or train until input_fn generates the tf.errors.OutOfRangeError or StopIteration exception. If set, steps must be None. If OutOfRange or StopIteration occurs in the middle, training stops before max_steps steps. Two calls to train(steps=100) mean 200 training iterations. On the other hand, two calls to train(max_steps=100) mean that the second call will not do any iteration since the first call did all 100 steps. saving_listeners list of CheckpointSaverListener objects. Used for callbacks that run immediately before or after checkpoint saves. Returns self, for chaining. Raises ValueError If both steps and max_steps are not None. ValueError If either steps or max_steps <= 0.
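Tying the pieces together, a minimal end-to-end sketch: constructing a TPUEstimator in CPU-test mode (use_tpu=False), training it, and exporting a SavedModel. model_fn and train_input_fn are assumed to be defined as in the examples above; feature_spec and the paths are hypothetical.

import tensorflow.compat.v1 as tf

run_config = tf.estimator.tpu.RunConfig(
    model_dir='/tmp/tpu_model',
    tpu_config=tf.estimator.tpu.TPUConfig(iterations_per_loop=100))

estimator = tf.estimator.tpu.TPUEstimator(
    model_fn=model_fn,           # assumed defined as above
    config=run_config,
    use_tpu=False,               # run on CPU/GPU for testing
    train_batch_size=64,         # passed unmodified as params['batch_size']
    eval_batch_size=64)

estimator.train(input_fn=train_input_fn, max_steps=1000)

def serving_input_receiver_fn():
  # Receives serialized tf.Example protos and parses them into features;
  # feature_spec is an assumed parsing spec dict.
  serialized = tf.placeholder(dtype=tf.string, shape=[None],
                              name='input_example_tensor')
  features = tf.io.parse_example(serialized, feature_spec)
  return tf.estimator.export.ServingInputReceiver(
      features, {'examples': serialized})

export_dir = estimator.export_saved_model('/tmp/exported_model',
                                          serving_input_receiver_fn)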
tf.compat.v1.estimator.tpu.TPUEstimatorSpec Ops and objects returned from a model_fn and passed to TPUEstimator. tf.compat.v1.estimator.tpu.TPUEstimatorSpec( mode, predictions=None, loss=None, train_op=None, eval_metrics=None, export_outputs=None, scaffold_fn=None, host_call=None, training_hooks=None, evaluation_hooks=None, prediction_hooks=None ) See EstimatorSpec for mode, predictions, loss, train_op, and export_outputs. For evaluation, eval_metrics is a tuple of metric_fn and tensors, where metric_fn runs on CPU to generate metrics and tensors represents the Tensors transferred from the TPU system to the CPU host and passed to metric_fn. To be precise, TPU evaluation expects a slightly different signature from the tf.estimator.Estimator. While EstimatorSpec.eval_metric_ops expects a dict, TPUEstimatorSpec.eval_metrics is a tuple of metric_fn and tensors. The tensors could be a list of Tensors or a dict of names to Tensors. The tensors usually specify the model logits, which are transferred back from the TPU system to the CPU host. All tensors must be batch-major, i.e., the batch size is the first dimension. Once all tensors are available at the CPU host from all shards, they are concatenated (on CPU) and passed as positional arguments to the metric_fn if tensors is a list, or as keyword arguments if tensors is a dict. metric_fn takes the tensors and returns a dict from metric string name to the result of calling a metric function, namely a (metric_tensor, update_op) tuple. See TPUEstimator for an MNIST example of how to specify the eval_metrics. scaffold_fn is a function running on CPU to generate the Scaffold. This function should not capture any Tensors in model_fn. host_call is a tuple of a function and a list or dictionary of tensors to pass to that function; the function returns a list of Tensors. host_call currently works for train() and evaluate(). The function is executed on the CPU on every step, so there is communication overhead when sending tensors from TPU to CPU. To reduce the overhead, try reducing the size of the tensors. The tensors are concatenated along their major (batch) dimension, and so must be >= rank 1. The host_call is useful for writing summaries with tf.contrib.summary.create_file_writer. Attributes mode predictions loss train_op eval_metrics export_outputs scaffold_fn host_call training_hooks evaluation_hooks prediction_hooks Methods as_estimator_spec View source as_estimator_spec() Creates an equivalent EstimatorSpec used by CPU train/eval.
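A minimal model_fn sketch returning TPUEstimatorSpec for both EVAL and TRAIN. The toy dense model and the feature key 'x' are illustrative only, and for real TPU training the optimizer would typically be wrapped in tf.compat.v1.tpu.CrossShardOptimizer:

import tensorflow.compat.v1 as tf

def metric_fn(labels, logits):
  # Runs on the CPU host after `labels` and `logits` have been transferred
  # back from the TPU and concatenated along the batch dimension.
  predictions = tf.argmax(logits, axis=1)
  return {'accuracy': tf.metrics.accuracy(labels=labels,
                                          predictions=predictions)}

def model_fn(features, labels, mode, params):
  logits = tf.layers.dense(features['x'], 10)  # toy model for illustration
  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  if mode == tf.estimator.ModeKeys.EVAL:
    # eval_metrics is a (metric_fn, tensors) tuple; tensors are batch-major.
    return tf.estimator.tpu.TPUEstimatorSpec(
        mode=mode, loss=loss, eval_metrics=(metric_fn, [labels, logits]))
  train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
      loss, global_step=tf.train.get_global_step())
  return tf.estimator.tpu.TPUEstimatorSpec(mode=mode, loss=loss,
                                           train_op=train_op)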
tf.compat.v1.Event A ProtocolMessage View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.summary.Event Attributes file_version string file_version graph_def bytes graph_def log_message LogMessage log_message meta_graph_def bytes meta_graph_def session_log SessionLog session_log step int64 step summary Summary summary tagged_run_metadata TaggedRunMetadata tagged_run_metadata wall_time double wall_time
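Event protos are what a summary writer appends to an event file; a minimal sketch of reading them back (event_file is a hypothetical path to a tfevents file):

import tensorflow.compat.v1 as tf

for event in tf.train.summary_iterator(event_file):
  # Each record is one Event proto; not all fields are set on every event.
  print(event.wall_time, event.step)
  for value in event.summary.value:
    print('  tag:', value.tag)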
tf.compat.v1.executing_eagerly Checks whether the current thread has eager execution enabled. tf.compat.v1.executing_eagerly() Eager execution is typically enabled via tf.compat.v1.enable_eager_execution, but may also be enabled within the context of a Python function via tf.contrib.eager.py_func. When eager execution is enabled, returns True in most cases. However, this API might return False in the following use cases. Executing inside tf.function, unless under tf.init_scope or tf.config.run_functions_eagerly(True) is previously called. Executing inside a transformation function for tf.dataset. tf.compat.v1.disable_eager_execution() is called. tf.compat.v1.enable_eager_execution() General case: print(tf.executing_eagerly()) True Inside tf.function: @tf.function def fn(): with tf.init_scope(): print(tf.executing_eagerly()) print(tf.executing_eagerly()) fn() True False Inside tf.function after tf.config.run_functions_eagerly(True) is called: tf.config.run_functions_eagerly(True) @tf.function def fn(): with tf.init_scope(): print(tf.executing_eagerly()) print(tf.executing_eagerly()) fn() True True tf.config.run_functions_eagerly(False) Inside a transformation function for tf.dataset: def data_fn(x): print(tf.executing_eagerly()) return x dataset = tf.data.Dataset.range(100) dataset = dataset.map(data_fn) False Returns True if the current thread has eager execution enabled.
tf.compat.v1.executing_eagerly_outside_functions Returns True if executing eagerly, even if inside a graph function. tf.compat.v1.executing_eagerly_outside_functions() This function will check the outermost context for the program and see if it is in eager mode. It is useful in comparison to tf.executing_eagerly(), which checks the current context and will return False within a tf.function body. It can be used to build libraries that behave differently in the eager runtime and the v1 session runtime (deprecated). Example: tf.compat.v1.enable_eager_execution() @tf.function def func(): # A function constructs TensorFlow graphs, it does not execute eagerly, # but the outermost context is still eager. assert not tf.executing_eagerly() return tf.compat.v1.executing_eagerly_outside_functions() func() <tf.Tensor: shape=(), dtype=bool, numpy=True> Returns boolean, whether the outermost context is in eager mode.
tf.compat.v1.expand_dims Returns a tensor with a length 1 axis inserted at index axis. (deprecated arguments) tf.compat.v1.expand_dims( input, axis=None, name=None, dim=None ) Warning: SOME ARGUMENTS ARE DEPRECATED: (dim). They will be removed in a future version. Instructions for updating: Use the axis argument instead. Given a tensor input, this operation inserts a dimension of length 1 at the dimension index axis of input's shape. The dimension index follows Python indexing rules: it's zero-based, and a negative index is counted backward from the end. This operation is useful to: Add an outer "batch" dimension to a single element. Align axes for broadcasting. Add an inner vector length axis to a tensor of scalars. For example: If you have a single image of shape [height, width, channels]: image = tf.zeros([10,10,3]) You can add an outer batch axis by passing axis=0: tf.expand_dims(image, axis=0).shape.as_list() [1, 10, 10, 3] The new axis location matches Python list.insert(axis, 1): tf.expand_dims(image, axis=1).shape.as_list() [10, 1, 10, 3] Following standard Python indexing rules, a negative axis counts from the end so axis=-1 adds an innermost dimension: tf.expand_dims(image, -1).shape.as_list() [10, 10, 3, 1] This operation requires that axis is a valid index for input.shape, following Python indexing rules: -1-tf.rank(input) <= axis <= tf.rank(input) This operation is related to: tf.squeeze, which removes dimensions of size 1. tf.reshape, which provides more flexible reshaping capability. tf.sparse.expand_dims, which provides this functionality for tf.SparseTensor. Args input A Tensor. axis 0-D (scalar). Specifies the dimension index at which to expand the shape of input. Must be in the range [-rank(input) - 1, rank(input)]. name The name of the output Tensor (optional). dim 0-D (scalar). Equivalent to axis, to be deprecated. Returns A Tensor with the same data as input, but its shape has an additional dimension of size 1 added. Raises ValueError if either both or neither of dim and axis are specified.
Module: tf.compat.v1.experimental Public API for tf.experimental namespace. Classes class Optional: Represents a value that may or may not be present. Functions async_clear_error(...): Clear pending operations and error statuses in async execution. async_scope(...): Context manager for grouping async operations. function_executor_type(...): Context manager for setting the executor of eager defined functions. output_all_intermediates(...): Whether to output all intermediates from functional control flow ops. register_filesystem_plugin(...): Loads a TensorFlow FileSystem plugin.
tf.compat.v1.experimental.output_all_intermediates Whether to output all intermediates from functional control flow ops. tf.compat.v1.experimental.output_all_intermediates( state ) The default behavior is to output all intermediates when using v2 control flow inside Keras models in graph mode (possibly inside Estimators). This is needed to support taking gradients of v2 control flow. In graph mode, Keras can sometimes freeze the forward graph before the gradient computation, which does not work for v2 control flow since it requires updating the forward ops to output the needed intermediates. We work around this by proactively outputting the needed intermediates when building the forward pass itself. Ideally any such extra tensors should be pruned out at runtime. However, if for any reason this doesn't work for you or if you have an inference-only model, you can turn this behavior off using tf.compat.v1.experimental.output_all_intermediates(False). If with the default behavior you are still seeing errors of the form "Connecting to invalid output X of source node Y which has Z outputs" try setting tf.compat.v1.experimental.output_all_intermediates(True) and please file an issue at https://github.com/tensorflow/tensorflow/issues. Args state True, False or None. None restores the default behavior.
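A minimal usage sketch for an inference-only graph-mode model; the call must happen before the model's forward pass is built:

import tensorflow.compat.v1 as tf

# Inference-only model: skip outputting control-flow intermediates,
# since no gradients will be taken through the forward graph.
tf.experimental.output_all_intermediates(False)
# ... build and run the inference graph after this point ...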
tf.compat.v1.extract_image_patches Extract patches from images and put them in the "depth" output dimension. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.image.extract_image_patches tf.compat.v1.extract_image_patches( images, ksizes=None, strides=None, rates=None, padding=None, name=None, sizes=None ) Args images A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, uint8, uint16, uint32, uint64, complex64, complex128, bool. 4-D Tensor with shape [batch, in_rows, in_cols, depth]. ksizes A list of ints that has length >= 4. The size of the sliding window for each dimension of images. strides A list of ints that has length >= 4. How far the centers of two consecutive patches are in the images. Must be: [1, stride_rows, stride_cols, 1]. rates A list of ints that has length >= 4. Must be: [1, rate_rows, rate_cols, 1]. This is the input stride, specifying how far two consecutive patch samples are in the input. Equivalent to extracting patches with patch_sizes_eff = patch_sizes + (patch_sizes - 1) * (rates - 1), followed by subsampling them spatially by a factor of rates. This is equivalent to rate in dilated (a.k.a. Atrous) convolutions. padding A string from: "SAME", "VALID". The type of padding algorithm to use. name A name for the operation (optional). Returns A Tensor. Has the same type as images.
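A small worked example, showing how each 3x3 patch is flattened into the depth dimension:

import tensorflow.compat.v1 as tf

# A single 10x10 one-channel "image" containing the numbers 0..99.
images = tf.reshape(tf.range(100), [1, 10, 10, 1])

# 3x3 patches sampled every 5 pixels in each spatial dimension.
patches = tf.extract_image_patches(
    images,
    ksizes=[1, 3, 3, 1],
    strides=[1, 5, 5, 1],
    rates=[1, 1, 1, 1],
    padding='VALID')
print(patches.shape)  # (1, 2, 2, 9): each 3x3 patch flattened into depth.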
Module: tf.compat.v1.feature_column Public API for tf.feature_column namespace. Functions bucketized_column(...): Represents discretized dense input bucketed by boundaries. categorical_column_with_hash_bucket(...): Represents sparse feature where ids are set by hashing. categorical_column_with_identity(...): A CategoricalColumn that returns identity values. categorical_column_with_vocabulary_file(...): A CategoricalColumn with a vocabulary file. categorical_column_with_vocabulary_list(...): A CategoricalColumn with in-memory vocabulary. crossed_column(...): Returns a column for performing crosses of categorical features. embedding_column(...): DenseColumn that converts from sparse, categorical input. indicator_column(...): Represents multi-hot representation of given categorical column. input_layer(...): Returns a dense Tensor as input layer based on given feature_columns. linear_model(...): Returns a linear prediction Tensor based on given feature_columns. make_parse_example_spec(...): Creates parsing spec dictionary from input feature_columns. numeric_column(...): Represents real valued or numerical features. sequence_categorical_column_with_hash_bucket(...): A sequence of categorical terms where ids are set by hashing. sequence_categorical_column_with_identity(...): Returns a feature column that represents sequences of integers. sequence_categorical_column_with_vocabulary_file(...): A sequence of categorical terms where ids use a vocabulary file. sequence_categorical_column_with_vocabulary_list(...): A sequence of categorical terms where ids use an in-memory list. sequence_numeric_column(...): Returns a feature column that represents sequences of numeric data. shared_embedding_columns(...): List of dense columns that convert from sparse, categorical input. weighted_categorical_column(...): Applies weight values to a CategoricalColumn.
tf.compat.v1.feature_column.categorical_column_with_vocabulary_file A CategoricalColumn with a vocabulary file. tf.compat.v1.feature_column.categorical_column_with_vocabulary_file( key, vocabulary_file, vocabulary_size=None, num_oov_buckets=0, default_value=None, dtype=tf.dtypes.string ) Use this when your inputs are in string or integer format, and you have a vocabulary file that maps each value to an integer ID. By default, out-of-vocabulary values are ignored. Use either (but not both) of num_oov_buckets and default_value to specify how to include out-of-vocabulary values. For input dictionary features, features[key] is either Tensor or SparseTensor. If Tensor, missing values can be represented by -1 for int and '' for string, which will be dropped by this feature column. Example with num_oov_buckets: File '/us/states.txt' contains 50 lines, each with a 2-character U.S. state abbreviation. Every input whose value appears in that file is assigned an ID in 0-49, corresponding to its line number. All other values are hashed and assigned an ID in 50-54. states = categorical_column_with_vocabulary_file( key='states', vocabulary_file='/us/states.txt', vocabulary_size=50, num_oov_buckets=5) columns = [states, ...] features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) linear_prediction = linear_model(features, columns) Example with default_value: File '/us/states.txt' contains 51 lines - the first line is 'XX', and the other 50 each have a 2-character U.S. state abbreviation. Both a literal 'XX' in input, and other values missing from the file, will be assigned ID 0. All others are assigned the corresponding line number 1-50. states = categorical_column_with_vocabulary_file( key='states', vocabulary_file='/us/states.txt', vocabulary_size=51, default_value=0) columns = [states, ...] features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) linear_prediction = linear_model(features, columns) And to make an embedding with either: columns = [embedding_column(states, 3),...] features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) dense_tensor = input_layer(features, columns) Args key A unique string identifying the input feature. It is used as the column name and the dictionary key for feature parsing configs, feature Tensor objects, and feature columns. vocabulary_file The vocabulary file name. vocabulary_size Number of elements in the vocabulary. This must be no greater than the length of vocabulary_file; if it is less, later values in the file are ignored. If None, it is set to the length of vocabulary_file. num_oov_buckets Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range [vocabulary_size, vocabulary_size+num_oov_buckets) based on a hash of the input value. A positive num_oov_buckets cannot be specified with default_value. default_value The integer ID value to return for out-of-vocabulary feature values, defaults to -1. This cannot be specified with a positive num_oov_buckets. dtype The type of features. Only string and integer types are supported. Returns A CategoricalColumn with a vocabulary file. Raises ValueError vocabulary_file is missing or cannot be opened. ValueError vocabulary_size is missing or < 1. ValueError num_oov_buckets is a negative integer. ValueError num_oov_buckets and default_value are both specified. ValueError dtype is neither string nor integer.
tf.compat.v1.feature_column.input_layer Returns a dense Tensor as input layer based on given feature_columns. tf.compat.v1.feature_column.input_layer( features, feature_columns, weight_collections=None, trainable=True, cols_to_vars=None, cols_to_output_tensors=None ) Generally a single example in training data is described with FeatureColumns. At the first layer of the model, this column oriented data should be converted to a single Tensor. Example: price = numeric_column('price') keywords_embedded = embedding_column( categorical_column_with_hash_bucket("keywords", 10000), dimension=16) columns = [price, keywords_embedded, ...] features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) dense_tensor = input_layer(features, columns) for units in [128, 64, 32]: dense_tensor = tf.compat.v1.layers.dense(dense_tensor, units, tf.nn.relu) prediction = tf.compat.v1.layers.dense(dense_tensor, 1) Args features A mapping from key to tensors. _FeatureColumns look up via these keys. For example numeric_column('price') will look at 'price' key in this dict. Values can be a SparseTensor or a Tensor depending on the corresponding _FeatureColumn. feature_columns An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from _DenseColumn such as numeric_column, embedding_column, bucketized_column, indicator_column. If you have categorical features, you can wrap them with an embedding_column or indicator_column. weight_collections A list of collection names to which the Variable will be added. Note that variables will also be added to collections tf.GraphKeys.GLOBAL_VARIABLES and ops.GraphKeys.MODEL_VARIABLES. trainable If True also add the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). cols_to_vars If not None, must be a dictionary that will be filled with a mapping from _FeatureColumn to list of Variables. For example, after the call, we might have cols_to_vars = {_EmbeddingColumn( categorical_column=_HashedCategoricalColumn( key='sparse_feature', hash_bucket_size=5, dtype=tf.string), dimension=10): [<list of the column's embedding Variables>]}. cols_to_output_tensors If not None, must be a dictionary that will be filled with a mapping from _FeatureColumn to the associated output Tensors. Returns A Tensor which represents input layer of a model. Its shape is (batch_size, first_layer_dimension) and its dtype is float32. first_layer_dimension is determined based on given feature_columns. Raises ValueError if an item in feature_columns is not a _DenseColumn.
tf.compat.v1.feature_column.linear_model Returns a linear prediction Tensor based on given feature_columns. tf.compat.v1.feature_column.linear_model( features, feature_columns, units=1, sparse_combiner='sum', weight_collections=None, trainable=True, cols_to_vars=None ) This function generates a weighted sum based on output dimension units. Weighted sum refers to logits in classification problems. It refers to the prediction itself for linear regression problems. Note on supported columns: linear_model treats categorical columns as indicator_columns. To be specific, assume the input as SparseTensor looks like: shape = [2, 2] { [0, 0]: "a" [1, 0]: "b" [1, 1]: "c" } linear_model assigns weights for the presence of "a", "b", "c" implicitly, just like indicator_column, while input_layer explicitly requires wrapping each of the categorical columns with an embedding_column or an indicator_column. Example of usage: price = numeric_column('price') price_buckets = bucketized_column(price, boundaries=[0., 10., 100., 1000.]) keywords = categorical_column_with_hash_bucket("keywords", 10000) keywords_price = crossed_column(['keywords', price_buckets], ...) columns = [price_buckets, keywords, keywords_price, ...] features = tf.io.parse_example(..., features=make_parse_example_spec(columns)) prediction = linear_model(features, columns) The sparse_combiner argument works as follows. For example, for two features represented as the categorical columns: # Feature 1 shape = [2, 2] { [0, 0]: "a" [0, 1]: "b" [1, 0]: "c" } # Feature 2 shape = [2, 3] { [0, 0]: "d" [1, 0]: "e" [1, 1]: "f" [1, 2]: "f" } with sparse_combiner as "mean", the linear model outputs are: y_0 = 1.0 / 2.0 * ( w_a + w_b ) + w_d + b y_1 = w_c + 1.0 / 3.0 * ( w_e + 2.0 * w_f ) + b where y_i is the output, b is the bias, and w_x is the weight assigned to the presence of x in the input features. Args features A mapping from key to tensors. _FeatureColumns look up via these keys. For example numeric_column('price') will look at 'price' key in this dict. Values are Tensor or SparseTensor depending on corresponding _FeatureColumn. feature_columns An iterable containing the FeatureColumns to use as inputs to your model. All items should be instances of classes derived from _FeatureColumns. units An integer, dimensionality of the output space. Default value is 1. sparse_combiner A string specifying how to reduce if a categorical column is multivalent. Except numeric_column, almost all columns passed to linear_model are considered as categorical columns. It combines each categorical column independently. Currently "mean", "sqrtn" and "sum" are supported, with "sum" the default for linear model. "sqrtn" often achieves good accuracy, in particular with bag-of-words columns. "sum": do not normalize features in the column "mean": do l1 normalization on features in the column "sqrtn": do l2 normalization on features in the column weight_collections A list of collection names to which the Variable will be added. Note that variables will also be added to collections tf.GraphKeys.GLOBAL_VARIABLES and ops.GraphKeys.MODEL_VARIABLES. trainable If True also add the variable to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable). cols_to_vars If not None, must be a dictionary that will be filled with a mapping from _FeatureColumn to associated list of Variables.
For example, after the call, we might have cols_to_vars = {_NumericColumn(key='numeric_feature1', shape=(1,)): [<list of weight Variables>], 'bias': [<list of bias Variables>], _NumericColumn(key='numeric_feature2', shape=(2,)): [<list of weight Variables>]} If a column creates no variables, its value will be an empty list. Note that cols_to_vars will also contain a string key 'bias' that maps to a list of Variables. Returns A Tensor which represents predictions/logits of a linear model. Its shape is (batch_size, units) and its dtype is float32. Raises ValueError if an item in feature_columns is neither a _DenseColumn nor a _CategoricalColumn.
tf.compat.v1.feature_column.make_parse_example_spec Creates parsing spec dictionary from input feature_columns. tf.compat.v1.feature_column.make_parse_example_spec( feature_columns ) The returned dictionary can be used as the features argument to tf.io.parse_example. Typical usage example: # Define features and transformations feature_a = categorical_column_with_vocabulary_file(...) feature_b = numeric_column(...) feature_c_bucketized = bucketized_column(numeric_column("feature_c"), ...) feature_a_x_feature_c = crossed_column( columns=["feature_a", feature_c_bucketized], ...) feature_columns = set( [feature_b, feature_c_bucketized, feature_a_x_feature_c]) features = tf.io.parse_example( serialized=serialized_examples, features=make_parse_example_spec(feature_columns)) For the above example, make_parse_example_spec would return the dict: { "feature_a": parsing_ops.VarLenFeature(tf.string), "feature_b": parsing_ops.FixedLenFeature([1], dtype=tf.float32), "feature_c": parsing_ops.FixedLenFeature([1], dtype=tf.float32) } Args feature_columns An iterable containing all feature columns. All items should be instances of classes derived from _FeatureColumn. Returns A dict mapping each feature key to a FixedLenFeature or VarLenFeature value. Raises ValueError If any of the given feature_columns is not a _FeatureColumn instance.
tf.compat.v1.feature_column.shared_embedding_columns List of dense columns that convert from sparse, categorical input. tf.compat.v1.feature_column.shared_embedding_columns( categorical_columns, dimension, combiner='mean', initializer=None, shared_embedding_collection_name=None, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True, use_safe_embedding_lookup=True ) This is similar to embedding_column, except that it produces a list of embedding columns that share the same embedding weights. Use this when your inputs are sparse and of the same type (e.g. watched and impression video IDs that share the same vocabulary), and you want to convert them to a dense representation (e.g., to feed to a DNN). Inputs must be a list of categorical columns created by any of the categorical_column_* functions. They must all be of the same type and have the same arguments except key. E.g. they can be categorical_column_with_vocabulary_file with the same vocabulary_file. Some or all columns could also be weighted_categorical_column. Here is an example embedding of two features for a DNNClassifier model: watched_video_id = categorical_column_with_vocabulary_file( 'watched_video_id', video_vocabulary_file, video_vocabulary_size) impression_video_id = categorical_column_with_vocabulary_file( 'impression_video_id', video_vocabulary_file, video_vocabulary_size) columns = shared_embedding_columns( [watched_video_id, impression_video_id], dimension=10) estimator = tf.estimator.DNNClassifier(feature_columns=columns, ...) label_column = ... def input_fn(): features = tf.io.parse_example( ..., features=make_parse_example_spec(columns + [label_column])) labels = features.pop(label_column.name) return features, labels estimator.train(input_fn=input_fn, steps=100) Here is an example using shared_embedding_columns with model_fn: def model_fn(features, ...): watched_video_id = categorical_column_with_vocabulary_file( 'watched_video_id', video_vocabulary_file, video_vocabulary_size) impression_video_id = categorical_column_with_vocabulary_file( 'impression_video_id', video_vocabulary_file, video_vocabulary_size) columns = shared_embedding_columns( [watched_video_id, impression_video_id], dimension=10) dense_tensor = input_layer(features, columns) # Form DNN layers, calculate loss, and return EstimatorSpec. ... Args categorical_columns List of categorical columns created by a categorical_column_with_* function. These columns produce the sparse IDs that are inputs to the embedding lookup. All columns must be of the same type and have the same arguments except key. E.g. they can be categorical_column_with_vocabulary_file with the same vocabulary_file. Some or all columns could also be weighted_categorical_column. dimension An integer specifying dimension of the embedding, must be > 0. combiner A string specifying how to reduce if there are multiple entries in a single row. Currently 'mean', 'sqrtn' and 'sum' are supported, with 'mean' the default. 'sqrtn' often achieves good accuracy, in particular with bag-of-words columns. Each of these can be thought of as an example-level normalization on the column. For more information, see tf.nn.embedding_lookup_sparse. initializer A variable initializer function to be used in embedding variable initialization. If not specified, defaults to truncated_normal_initializer with mean 0.0 and standard deviation 1/sqrt(dimension). shared_embedding_collection_name Optional name of the collection where shared embedding weights are added.
If not given, a reasonable name will be chosen based on the names of categorical_columns. This is also used in variable_scope when creating shared embedding weights. ckpt_to_load_from String representing checkpoint name/pattern from which to restore column weights. Required if tensor_name_in_ckpt is not None. tensor_name_in_ckpt Name of the Tensor in ckpt_to_load_from from which to restore the column weights. Required if ckpt_to_load_from is not None. max_norm If not None, each embedding is clipped if its l2-norm is larger than this value, before combining. trainable Whether or not the embedding is trainable. Default is True. use_safe_embedding_lookup If true, uses safe_embedding_lookup_sparse instead of embedding_lookup_sparse. safe_embedding_lookup_sparse ensures there are no empty rows and all weights and ids are positive at the expense of extra compute cost. This only applies to rank 2 (NxM) shaped input tensors. Defaults to true; consider turning it off if the above checks are not needed. Note that having empty rows will not trigger any error, though the output result might be 0 or omitted. Returns A list of dense columns that convert from sparse input. The order of results follows the ordering of categorical_columns. Raises ValueError if dimension not > 0. ValueError if any of the given categorical_columns is of different type or has different arguments than the others. ValueError if exactly one of ckpt_to_load_from and tensor_name_in_ckpt is specified. ValueError if initializer is specified and is not callable. RuntimeError if eager execution is enabled.
tf.compat.v1.FixedLengthRecordReader A Reader that outputs fixed-length records from a file. Inherits From: ReaderBase tf.compat.v1.FixedLengthRecordReader( record_bytes, header_bytes=None, footer_bytes=None, hop_bytes=None, name=None, encoding=None ) See ReaderBase for supported methods. Args record_bytes An int. header_bytes An optional int. Defaults to 0. footer_bytes An optional int. Defaults to 0. hop_bytes An optional int. Defaults to 0. name A name for the operation (optional). encoding The type of encoding for the file. Defaults to None. Eager Compatibility Readers are not compatible with eager execution. Instead, please use tf.data to get data into your model. Attributes reader_ref Op that implements the reader. supports_serialize Whether the Reader implementation can serialize its state. Methods num_records_produced View source num_records_produced( name=None ) Returns the number of records this reader has produced. This is the same as the number of Read executions that have succeeded. Args name A name for the operation (optional). Returns An int64 Tensor. num_work_units_completed View source num_work_units_completed( name=None ) Returns the number of work units this reader has finished processing. Args name A name for the operation (optional). Returns An int64 Tensor. read View source read( queue, name=None ) Returns the next record (key, value) pair produced by a reader. Will dequeue a work unit from queue if necessary (e.g. when the Reader needs to start reading from a new file since it has finished with the previous file). Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. name A name for the operation (optional). Returns A tuple of Tensors (key, value). key A string scalar Tensor. value A string scalar Tensor. read_up_to View source read_up_to( queue, num_records, name=None ) Returns up to num_records (key, value) pairs produced by a reader. Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return fewer than num_records pairs even before the last batch. Args queue A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. num_records Number of records to read. name A name for the operation (optional). Returns A tuple of Tensors (keys, values). keys A 1-D string Tensor. values A 1-D string Tensor. reset View source reset( name=None ) Restore a reader to its initial clean state. Args name A name for the operation (optional). Returns The created Operation. restore_state View source restore_state( state, name=None ) Restore a reader to a previously saved state. Not all Readers support being restored, so this can produce an Unimplemented error. Args state A string Tensor. Result of a SerializeState of a Reader with matching type. name A name for the operation (optional). Returns The created Operation. serialize_state View source serialize_state( name=None ) Produce a string tensor that encodes the state of a reader. Not all Readers support being serialized, so this can produce an Unimplemented error. Args name A name for the operation (optional). Returns A string Tensor.
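A hedged sketch of queue-based graph-mode usage (the file name and record size are illustrative):
filename_queue = tf.compat.v1.train.string_input_producer(['data.bin'])
reader = tf.compat.v1.FixedLengthRecordReader(record_bytes=16)
key, value = reader.read(filename_queue)    # one 16-byte record per call
record = tf.io.decode_raw(value, tf.uint8)  # reinterpret the record's bytes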
tf.compat.v1.fixed_size_partitioner Partitioner to specify a fixed number of shards along given axis. tf.compat.v1.fixed_size_partitioner( num_shards, axis=0 ) Args num_shards int, number of shards to partition variable. axis int, axis to partition on. Returns A partition function usable as the partitioner argument to variable_scope and get_variable.
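For example, a sketch of sharding a variable four ways along its first axis (the names are illustrative):
partitioner = tf.compat.v1.fixed_size_partitioner(num_shards=4)
with tf.compat.v1.variable_scope('embeddings', partitioner=partitioner):
  weights = tf.compat.v1.get_variable('weights', shape=[1000, 64])
# 'weights' is created as 4 shards of shape [250, 64] along axis 0.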
Module: tf.compat.v1.flags Import router for absl.flags. See https://github.com/abseil/abseil-py View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags Modules tf_decorator module: Base TFDecorator class and utility functions for working with decorators. Classes class ArgumentParser: Base class used to parse and convert arguments. class ArgumentSerializer: Base class for generating string representations of a flag value. class BaseListParser: Base class for a parser of lists of strings. class BooleanFlag: Basic boolean flag. class BooleanParser: Parser of boolean values. class CantOpenFlagFileError: Raised when flagfile fails to open. class CsvListSerializer: Base class for generating string representations of a flag value. class DuplicateFlagError: Raised if there is a flag naming conflict. class EnumClassFlag: Basic enum flag; its value is an enum class's member. class EnumClassParser: Parser of an Enum class member. class EnumFlag: Basic enum flag; its value can be any string from list of enum_values. class EnumParser: Parser of a string enum value (a string value from a given set). class Error: The base class for all flags errors. class Flag: Information about a command-line flag. class FlagHolder: Holds a defined flag. class FlagNameConflictsWithMethodError: Raised when a flag name conflicts with FlagValues methods. class FlagValues: Registry of 'Flag' objects. class FloatParser: Parser of floating point values. class IllegalFlagValueError: Raised when the flag command line argument is illegal. class IntegerParser: Parser of an integer value. class ListParser: Parser for a comma-separated list of strings. class ListSerializer: Base class for generating string representations of a flag value. class MultiEnumClassFlag: A multi_enum_class flag. class MultiFlag: A flag that can appear multiple times on the command-line. class UnparsedFlagAccessError: Raised when accessing the flag value from unparsed FlagValues. class UnrecognizedFlagError: Raised when a flag is unrecognized. class ValidationError: Raised when flag validator constraint is not satisfied. class WhitespaceSeparatedListParser: Parser for a whitespace-separated list of strings. Functions DEFINE(...): Registers a generic Flag object. DEFINE_alias(...): Defines an alias flag for an existing one. DEFINE_bool(...): Registers a boolean flag. DEFINE_boolean(...): Registers a boolean flag. DEFINE_enum(...): Registers a flag whose value can be any string from enum_values. DEFINE_enum_class(...): Registers a flag whose value can be the name of enum members. DEFINE_flag(...): Registers a 'Flag' object with a 'FlagValues' object. DEFINE_float(...): Registers a flag whose value must be a float. DEFINE_integer(...): Registers a flag whose value must be an integer. DEFINE_list(...): Registers a flag whose value is a comma-separated list of strings. DEFINE_multi(...): Registers a generic MultiFlag that parses its args with a given parser. DEFINE_multi_enum(...): Registers a flag whose value can be a list of strings from enum_values. DEFINE_multi_enum_class(...): Registers a flag whose value can be a list of enum members. DEFINE_multi_float(...): Registers a flag whose value can be a list of arbitrary floats. DEFINE_multi_integer(...): Registers a flag whose value can be a list of arbitrary integers. DEFINE_multi_string(...): Registers a flag whose value can be a list of any strings. DEFINE_spaceseplist(...): Registers a flag whose value is a whitespace-separated list of strings.
DEFINE_string(...): Registers a flag whose value can be any string. FLAGS(...): Registry of 'Flag' objects. adopt_module_key_flags(...): Declares that all flags key to a module are key to the current module. declare_key_flag(...): Declares one flag as key to the current module. disclaim_key_flags(...): Declares that the current module will not define any more key flags. doc_to_help(...): Takes a doc string and reformats it as help. flag_dict_to_args(...): Converts a dict of values into process call parameters. get_help_width(...): Returns the integer width of help lines that is used in TextWrap. mark_bool_flags_as_mutual_exclusive(...): Ensures that only one flag among flag_names is True. mark_flag_as_required(...): Ensures that flag is not None during program execution. mark_flags_as_mutual_exclusive(...): Ensures that only one flag among flag_names is not None. mark_flags_as_required(...): Ensures that flags are not None during program execution. multi_flags_validator(...): A function decorator for defining a multi-flag validator. register_multi_flags_validator(...): Adds a constraint to multiple flags. register_validator(...): Adds a constraint, which will be enforced during program execution. text_wrap(...): Wraps a given text to a maximum line length and returns it. validator(...): A function decorator for defining a flag validator.
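A minimal end-to-end sketch of the module's typical use (flag names and defaults are illustrative):
flags = tf.compat.v1.flags
flags.DEFINE_integer('batch_size', 32, 'Number of examples per batch.')
flags.DEFINE_string('model_dir', '/tmp/model', 'Where to write checkpoints.')
FLAGS = flags.FLAGS

def main(_):
  # Flags are parsed by app.run before main is called.
  print(FLAGS.batch_size, FLAGS.model_dir)

if __name__ == '__main__':
  tf.compat.v1.app.run(main)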
tf.compat.v1.flags.adopt_module_key_flags Declares that all flags key to a module are key to the current module. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.adopt_module_key_flags tf.compat.v1.flags.adopt_module_key_flags( module, flag_values=_flagvalues.FLAGS ) Args module module, the module object from which all key flags will be declared as key flags to the current module. flag_values FlagValues, the FlagValues instance in which the flags will be declared as key flags. This should almost never need to be overridden. Raises Error Raised when given an argument that is a module name (a string), instead of a module object.
tf.compat.v1.flags.ArgumentParser Base class used to parse and convert arguments. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.ArgumentParser The parse() method checks that the string argument is a legal value and converts it to a native type. If the value cannot be converted, it should throw a 'ValueError' exception with a human readable explanation of why the value is illegal. Subclasses should also define a syntactic_help string which may be presented to the user to describe the form of the legal values. Argument parser classes must be stateless, since instances are cached and shared between flags. Initializer arguments are allowed, but all member variables must be derived from initializer arguments only. Methods flag_type flag_type() Returns a string representing the type of the flag. parse parse( argument ) Parses the string argument and returns the native value. By default it returns its argument unmodified. Args argument string argument passed in the commandline. Raises ValueError Raised when it fails to parse the argument. TypeError Raised when the argument has the wrong type. Returns The parsed value in native type. Class Variables syntactic_help ''
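A hedged sketch of a custom parser (the class name is hypothetical); note that it keeps no per-flag state, as required above:
class PositiveIntegerParser(tf.compat.v1.flags.ArgumentParser):
  syntactic_help = 'a positive integer'

  def parse(self, argument):
    value = int(argument)  # int() itself raises ValueError on bad input
    if value <= 0:
      raise ValueError('Expected a positive integer, got %r' % argument)
    return value

  def flag_type(self):
    return 'positive int'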
tf.compat.v1.flags.ArgumentSerializer Base class for generating string representations of a flag value. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.ArgumentSerializer Methods serialize serialize( value ) Returns a serialized string of the value.
tf.compat.v1.flags.BaseListParser Base class for a parser of lists of strings. Inherits From: ArgumentParser View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.BaseListParser tf.compat.v1.flags.BaseListParser( token=None, name=None ) To extend, inherit from this class; from the subclass init, call BaseListParser.__init__(self, token, name) where token is a character used to tokenize, and name is a description of the separator. Methods flag_type flag_type() See base class. parse parse( argument ) See base class. Class Variables syntactic_help ''
tf.compat.v1.flags.BooleanFlag Basic boolean flag. Inherits From: Flag View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.BooleanFlag tf.compat.v1.flags.BooleanFlag( name, default, help, short_name=None, **args ) Boolean flags do not take any arguments, and their value is either True (1) or False (0). The false value is specified on the command line by prepending the word 'no' to either the long or the short flag name. For example, if a Boolean flag was created whose long name was 'update' and whose short name was 'x', then this flag could be explicitly unset through either --noupdate or --nox. Attributes value Methods flag_type flag_type() Returns a str that describes the type of the flag. Note: we use strings, and not the types.*Type constants because our flags can have more exotic types, e.g., 'comma separated list of strings', 'whitespace separated list of strings', etc. parse parse( argument ) Parses string and sets flag value. Args argument str or the correct flag value type, argument to be parsed. serialize serialize() Serializes the flag. unparse unparse() __eq__ __eq__( other ) Return self==value. __ge__ __ge__( other, NotImplemented=NotImplemented ) Return a >= b. Computed by @total_ordering from (not a < b). __gt__ __gt__( other, NotImplemented=NotImplemented ) Return a > b. Computed by @total_ordering from (not a < b) and (a != b). __le__ __le__( other, NotImplemented=NotImplemented ) Return a <= b. Computed by @total_ordering from (a < b) or (a == b). __lt__ __lt__( other ) Return self<value.
tf.compat.v1.flags.BooleanParser Parser of boolean values. Inherits From: ArgumentParser View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.BooleanParser Methods flag_type flag_type() See base class. parse parse( argument ) See base class. Class Variables syntactic_help ''
tf.compat.v1.flags.CantOpenFlagFileError Raised when flagfile fails to open. Inherits From: Error View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.CantOpenFlagFileError E.g. the file doesn't exist, or has wrong permissions.
tf.compat.v1.flags.CsvListSerializer Base class for generating string representations of a flag value. Inherits From: ArgumentSerializer View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.CsvListSerializer tf.compat.v1.flags.CsvListSerializer( list_sep ) Methods serialize serialize( value ) Serializes a list as a CSV string or unicode.
tf.compat.v1.flags.declare_key_flag Declares one flag as key to the current module. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.declare_key_flag tf.compat.v1.flags.declare_key_flag( flag_name, flag_values=_flagvalues.FLAGS ) Key flags are flags that are deemed really important for a module. They are important when listing help messages; e.g., if the --helpshort command-line flag is used, then only the key flags of the main module are listed (instead of all flags, as in the case of --helpfull). Sample usage: flags.declare_key_flag('flag_1') Args flag_name str, the name of an already declared flag. (Redeclaring flags as key, including flags implicitly key because they were declared in this module, is a no-op.) flag_values FlagValues, the FlagValues instance in which the flag will be declared as a key flag. This should almost never need to be overridden. Raises ValueError Raised if flag_name not defined as a Python flag.
tf.compat.v1.flags.DEFINE Registers a generic Flag object. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE tf.compat.v1.flags.DEFINE( parser, name, default, help, flag_values=_flagvalues.FLAGS, serializer=None, module_name=None, **args ) Note: in the docstrings of all DEFINE* functions, "registers" is short for "creates a new flag and registers it". Auxiliary function: clients should use the specialized DEFINE_xxx functions instead. Args parser ArgumentParser, used to parse the flag arguments. name str, the flag name. default The default value of the flag. help str, the help message. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. serializer ArgumentSerializer, the flag serializer instance. module_name str, the name of the Python module declaring this flag. If not provided, it will be computed using the stack trace of this call. **args dict, the extra keyword args that are passed to Flag init. Returns a handle to defined flag.
tf.compat.v1.flags.DEFINE_alias Defines an alias flag for an existing one. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_alias tf.compat.v1.flags.DEFINE_alias( name, original_name, flag_values=_flagvalues.FLAGS, module_name=None ) Args name str, the flag name. original_name str, the original flag name. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. module_name A string, the name of the module that defines this flag. Returns a handle to defined flag. Raises flags.FlagError UnrecognizedFlagError: if the referenced flag doesn't exist. DuplicateFlagError: if the alias name has been used by some existing flag.
tf.compat.v1.flags.DEFINE_bool Registers a boolean flag. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_bool, tf.compat.v1.app.flags.DEFINE_boolean, tf.compat.v1.flags.DEFINE_boolean tf.compat.v1.flags.DEFINE_bool( name, default, help, flag_values=_flagvalues.FLAGS, module_name=None, **args ) Such a boolean flag does not take an argument. If a user wants to specify a false value explicitly, the long option beginning with 'no' must be used: i.e. --noflag This flag will have a value of None, True or False. None is possible if default=None and the user does not specify the flag on the command line. Args name str, the flag name. default bool|str|None, the default value of the flag. help str, the help message. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. module_name str, the name of the Python module declaring this flag. If not provided, it will be computed using the stack trace of this call. **args dict, the extra keyword args that are passed to Flag init. Returns a handle to defined flag.
tf.compat.v1.flags.DEFINE_enum Registers a flag whose value can be any string from enum_values. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_enum tf.compat.v1.flags.DEFINE_enum( name, default, enum_values, help, flag_values=_flagvalues.FLAGS, module_name=None, **args ) Instead of a string enum, prefer DEFINE_enum_class, which allows defining enums from an enum.Enum class. Args name str, the flag name. default str|None, the default value of the flag. enum_values [str], a non-empty list of strings with the possible values for the flag. help str, the help message. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. module_name str, the name of the Python module declaring this flag. If not provided, it will be computed using the stack trace of this call. **args dict, the extra keyword args that are passed to Flag init. Returns a handle to defined flag.
tf.compat.v1.flags.DEFINE_enum_class Registers a flag whose value can be the name of enum members. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_enum_class tf.compat.v1.flags.DEFINE_enum_class( name, default, enum_class, help, flag_values=_flagvalues.FLAGS, module_name=None, case_sensitive=False, **args ) Args name str, the flag name. default Enum|str|None, the default value of the flag. enum_class class, the Enum class with all the possible values for the flag. help str, the help message. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. module_name str, the name of the Python module declaring this flag. If not provided, it will be computed using the stack trace of this call. case_sensitive bool, whether to match case when mapping strings to members of the enum_class; defaults to False (case-insensitive matching). **args dict, the extra keyword args that are passed to Flag init. Returns a handle to defined flag.
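A minimal sketch (the enum and flag names are illustrative):
import enum

class Optimizer(enum.Enum):
  ADAM = 'adam'
  SGD = 'sgd'

tf.compat.v1.flags.DEFINE_enum_class(
    'optimizer', Optimizer.ADAM, Optimizer, 'Which optimizer to use.')
# With the default case_sensitive=False, --optimizer=sgd parses to
# Optimizer.SGD.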
tf.compat.v1.flags.DEFINE_flag Registers a 'Flag' object with a 'FlagValues' object. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_flag tf.compat.v1.flags.DEFINE_flag( flag, flag_values=_flagvalues.FLAGS, module_name=None ) By default, the global FLAGS 'FlagValues' object is used. Typical users will use one of the more specialized DEFINE_xxx functions, such as DEFINE_string or DEFINE_integer. But developers who need to create Flag objects themselves should use this function to register their flags. Args flag Flag, a flag that is key to the module. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. module_name str, the name of the Python module declaring this flag. If not provided, it will be computed using the stack trace of this call. Returns a handle to defined flag.
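A hedged sketch of registering a hand-built Flag (the flag name is illustrative); most users should call a specialized DEFINE_xxx function instead:
flags = tf.compat.v1.flags
parser = flags.ArgumentParser()        # base parser returns strings unmodified
serializer = flags.ArgumentSerializer()
run_label = flags.Flag(parser, serializer, 'run_label', 'baseline',
                       'Free-form label attached to this run.')
flags.DEFINE_flag(run_label)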
tf.compat.v1.flags.DEFINE_float Registers a flag whose value must be a float. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_float tf.compat.v1.flags.DEFINE_float( name, default, help, lower_bound=None, upper_bound=None, flag_values=_flagvalues.FLAGS, **args ) If lower_bound or upper_bound is set, then this flag must be within the given range. Args name str, the flag name. default float|str|None, the default value of the flag. help str, the help message. lower_bound float, min value of the flag. upper_bound float, max value of the flag. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. **args dict, the extra keyword args that are passed to DEFINE. Returns a handle to defined flag.
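For example (the flag name and bound are illustrative):
tf.compat.v1.flags.DEFINE_float(
    'learning_rate', 0.01, 'Initial learning rate.', lower_bound=0.0)
# A command-line value outside the range, e.g. --learning_rate=-1,
# raises IllegalFlagValueError during flag parsing.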
tf.compat.v1.flags.DEFINE_integer Registers a flag whose value must be an integer. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_integer tf.compat.v1.flags.DEFINE_integer( name, default, help, lower_bound=None, upper_bound=None, flag_values=_flagvalues.FLAGS, **args ) If lower_bound or upper_bound is set, then this flag must be within the given range. Args name str, the flag name. default int|str|None, the default value of the flag. help str, the help message. lower_bound int, min value of the flag. upper_bound int, max value of the flag. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. **args dict, the extra keyword args that are passed to DEFINE. Returns a handle to defined flag.
tf.compat.v1.flags.DEFINE_list Registers a flag whose value is a comma-separated list of strings. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_list tf.compat.v1.flags.DEFINE_list( name, default, help, flag_values=_flagvalues.FLAGS, **args ) The flag value is parsed with a CSV parser. Args name str, the flag name. default list|str|None, the default value of the flag. help str, the help message. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. **args Dictionary with extra keyword args that are passed to the Flag init. Returns a handle to defined flag.
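For example (the flag name and default are illustrative):
tf.compat.v1.flags.DEFINE_list(
    'hidden_units', '128,64,32', 'Comma-separated sizes of hidden layers.')
# After parsing, FLAGS.hidden_units == ['128', '64', '32']; the elements
# remain strings and must be converted by the caller.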
tf.compat.v1.flags.DEFINE_multi Registers a generic MultiFlag that parses its args with a given parser. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_multi tf.compat.v1.flags.DEFINE_multi( parser, serializer, name, default, help, flag_values=_flagvalues.FLAGS, module_name=None, **args ) Auxiliary function. Normal users should NOT use it directly. Developers who need to create their own 'Parser' classes for options which can appear multiple times can call this module function to register their flags. Args parser ArgumentParser, used to parse the flag arguments. serializer ArgumentSerializer, the flag serializer instance. name str, the flag name. default Union[Iterable[T], Text, None], the default value of the flag. If the value is text, it will be parsed as if it was provided from the command line. If the value is a non-string iterable, it will be iterated over to create a shallow copy of the values. If it is None, it is left as-is. help str, the help message. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. module_name A string, the name of the Python module declaring this flag. If not provided, it will be computed using the stack trace of this call. **args Dictionary with extra keyword args that are passed to the Flag init. Returns a handle to defined flag.
tf.compat.v1.flags.DEFINE_multi_enum Registers a flag whose value can be a list of strings from enum_values. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_multi_enum tf.compat.v1.flags.DEFINE_multi_enum( name, default, enum_values, help, flag_values=_flagvalues.FLAGS, case_sensitive=True, **args ) Use the flag on the command line multiple times to place multiple enum values into the list. The 'default' may be a single string (which will be converted into a single-element list) or a list of strings. Args name str, the flag name. default Union[Iterable[Text], Text, None], the default value of the flag; see DEFINE_multi. enum_values [str], a non-empty list of strings with the possible values for the flag. help str, the help message. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. case_sensitive Whether or not the enum is to be case-sensitive. **args Dictionary with extra keyword args that are passed to the Flag init. Returns a handle to defined flag.
tf.compat.v1.flags.DEFINE_multi_enum_class Registers a flag whose value can be a list of enum members. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_multi_enum_class tf.compat.v1.flags.DEFINE_multi_enum_class( name, default, enum_class, help, flag_values=_flagvalues.FLAGS, module_name=None, case_sensitive=False, **args ) Use the flag on the command line multiple times to place multiple enum values into the list. Args name str, the flag name. default Union[Iterable[Enum], Iterable[Text], Enum, Text, None], the default value of the flag; see DEFINE_multi; only differences are documented here. If the value is a single Enum, it is treated as a single-item list of that Enum value. If it is an iterable, text values within the iterable will be converted to the equivalent Enum objects. enum_class class, the Enum class with all the possible values for the flag. help str, the help message. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. module_name A string, the name of the Python module declaring this flag. If not provided, it will be computed using the stack trace of this call. case_sensitive bool, whether to match case when mapping strings to members of the enum_class; defaults to False (case-insensitive matching). **args Dictionary with extra keyword args that are passed to the Flag init. Returns a handle to defined flag.
tf.compat.v1.flags.DEFINE_multi_float Registers a flag whose value can be a list of arbitrary floats. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_multi_float tf.compat.v1.flags.DEFINE_multi_float( name, default, help, lower_bound=None, upper_bound=None, flag_values=_flagvalues.FLAGS, **args ) Use the flag on the command line multiple times to place multiple float values into the list. The 'default' may be a single float (which will be converted into a single-element list) or a list of floats. Args name str, the flag name. default Union[Iterable[float], Text, None], the default value of the flag; see DEFINE_multi. help str, the help message. lower_bound float, min values of the flag. upper_bound float, max values of the flag. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. **args Dictionary with extra keyword args that are passed to the Flag init. Returns a handle to defined flag.
tf.compat.v1.flags.DEFINE_multi_integer Registers a flag whose value can be a list of arbitrary integers. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_multi_integer tf.compat.v1.flags.DEFINE_multi_integer( name, default, help, lower_bound=None, upper_bound=None, flag_values=_flagvalues.FLAGS, **args ) Use the flag on the command line multiple times to place multiple integer values into the list. The 'default' may be a single integer (which will be converted into a single-element list) or a list of integers. Args name str, the flag name. default Union[Iterable[int], Text, None], the default value of the flag; see DEFINE_multi. help str, the help message. lower_bound int, min values of the flag. upper_bound int, max values of the flag. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. **args Dictionary with extra keyword args that are passed to the Flag init. Returns a handle to defined flag.
tf.compat.v1.flags.DEFINE_multi_string Registers a flag whose value can be a list of any strings. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_multi_string tf.compat.v1.flags.DEFINE_multi_string( name, default, help, flag_values=_flagvalues.FLAGS, **args ) Use the flag on the command line multiple times to place multiple string values into the list. The 'default' may be a single string (which will be converted into a single-element list) or a list of strings. Args name str, the flag name. default Union[Iterable[Text], Text, None], the default value of the flag; see DEFINE_multi. help str, the help message. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. **args Dictionary with extra keyword args that are passed to the Flag init. Returns a handle to defined flag.
tf.compat.v1.flags.DEFINE_spaceseplist Registers a flag whose value is a whitespace-separated list of strings. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_spaceseplist tf.compat.v1.flags.DEFINE_spaceseplist( name, default, help, comma_compat=False, flag_values=_flagvalues.FLAGS, **args ) Any whitespace can be used as a separator. Args name str, the flag name. default list|str|None, the default value of the flag. help str, the help message. comma_compat bool - Whether to support comma as an additional separator. If false then only whitespace is supported. This is intended only for backwards compatibility with flags that used to be comma-separated. flag_values FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. **args Dictionary with extra keyword args that are passed to the Flag init. Returns a handle to defined flag.
tf.compat.v1.flags.DEFINE_string Registers a flag whose value can be any string. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DEFINE_string tf.compat.v1.flags.DEFINE_string( name, default, help, flag_values=_flagvalues.FLAGS, **args )
tf.compat.v1.flags.disclaim_key_flags Declares that the current module will not define any more key flags. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.disclaim_key_flags tf.compat.v1.flags.disclaim_key_flags() Normally, the module that calls the DEFINE_xxx functions claims the flag to be its key flag. This is undesirable for modules that define additional DEFINE_yyy functions with their own flag parsers and serializers, since such a module would accidentally claim flags defined by DEFINE_yyy as its key flags. After calling this function, the module disclaims flag definitions made from then on, so the key flags will be correctly attributed to the caller of DEFINE_yyy. After calling this function, the module will not be able to define any more flags. This function will affect all FlagValues objects.
tf.compat.v1.flags.doc_to_help Takes a doc string and reformats it as help. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.doc_to_help tf.compat.v1.flags.doc_to_help( doc )
tf.compat.v1.flags.DuplicateFlagError Raised if there is a flag naming conflict. Inherits From: Error View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.DuplicateFlagError Methods from_flag @classmethod from_flag( flagname, flag_values, other_flag_values=None ) Creates a DuplicateFlagError by providing flag name and values. Args flagname str, the name of the flag being redefined. flag_values FlagValues, the FlagValues instance containing the first definition of flagname. other_flag_values FlagValues, if it is not None, it should be the FlagValues object where the second definition of flagname occurs. If it is None, we assume that we're being called when attempting to create the flag a second time, and we use the module calling this one as the source of the second definition. Returns An instance of DuplicateFlagError.
tf.compat.v1.flags.EnumClassFlag Basic enum flag; its value is an enum class's member. Inherits From: Flag View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.EnumClassFlag tf.compat.v1.flags.EnumClassFlag( name, default, help, enum_class, short_name=None, case_sensitive=False, **args ) Attributes value Methods flag_type flag_type() Returns a str that describes the type of the flag. Note: we use strings, and not the types.*Type constants because our flags can have more exotic types, e.g., 'comma separated list of strings', 'whitespace separated list of strings', etc. parse parse( argument ) Parses string and sets flag value. Args argument str or the correct flag value type, argument to be parsed. serialize serialize() Serializes the flag. unparse unparse() __eq__ __eq__( other ) Return self==value. __ge__ __ge__( other, NotImplemented=NotImplemented ) Return a >= b. Computed by @total_ordering from (not a < b). __gt__ __gt__( other, NotImplemented=NotImplemented ) Return a > b. Computed by @total_ordering from (not a < b) and (a != b). __le__ __le__( other, NotImplemented=NotImplemented ) Return a <= b. Computed by @total_ordering from (a < b) or (a == b). __lt__ __lt__( other ) Return self<value.
tf.compat.v1.flags.EnumClassParser Parser of an Enum class member. Inherits From: ArgumentParser View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.EnumClassParser tf.compat.v1.flags.EnumClassParser( enum_class, case_sensitive=True ) Args enum_class class, the Enum class with all possible flag values. case_sensitive bool, whether or not the enum is to be case-sensitive. If False, all member names must be unique when case is ignored. Raises TypeError When enum_class is not a subclass of Enum. ValueError When enum_class is empty. Attributes member_names The accepted enum names, in lowercase if not case sensitive. Methods flag_type flag_type() See base class. parse parse( argument ) Determines validity of argument and returns the correct element of enum. Args argument str or Enum class member, the supplied flag value. Returns The first matching Enum class member in Enum class. Raises ValueError Raised when argument didn't match anything in enum. Class Variables syntactic_help ''
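A minimal sketch (the enum is illustrative):
import enum

class Color(enum.Enum):
  RED = 1
  GREEN = 2

parser = tf.compat.v1.flags.EnumClassParser(Color, case_sensitive=False)
assert parser.parse('red') is Color.RED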
tf.compat.v1.flags.EnumFlag Basic enum flag; its value can be any string from list of enum_values. Inherits From: Flag View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.EnumFlag tf.compat.v1.flags.EnumFlag( name, default, help, enum_values, short_name=None, case_sensitive=True, **args ) Attributes value Methods flag_type flag_type() Returns a str that describes the type of the flag. Note: we use strings, and not the types.*Type constants because our flags can have more exotic types, e.g., 'comma separated list of strings', 'whitespace separated list of strings', etc. parse parse( argument ) Parses string and sets flag value. Args argument str or the correct flag value type, argument to be parsed. serialize serialize() Serializes the flag. unparse unparse() __eq__ __eq__( other ) Return self==value. __ge__ __ge__( other, NotImplemented=NotImplemented ) Return a >= b. Computed by @total_ordering from (not a < b). __gt__ __gt__( other, NotImplemented=NotImplemented ) Return a > b. Computed by @total_ordering from (not a < b) and (a != b). __le__ __le__( other, NotImplemented=NotImplemented ) Return a <= b. Computed by @total_ordering from (a < b) or (a == b). __lt__ __lt__( other ) Return self<value.
tf.compat.v1.flags.EnumParser Parser of a string enum value (a string value from a given set). Inherits From: ArgumentParser View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.EnumParser tf.compat.v1.flags.EnumParser( enum_values, case_sensitive=True ) Args enum_values [str], a non-empty list of string values in the enum. case_sensitive bool, whether or not the enum is to be case-sensitive. Raises ValueError When enum_values is empty. Methods flag_type flag_type() See base class. parse parse( argument ) Determines validity of argument and returns the correct element of enum. Args argument str, the supplied flag value. Returns The first matching element from enum_values. Raises ValueError Raised when argument didn't match anything in enum. Class Variables syntactic_help ''
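A direct usage sketch, assuming absl.flags; the values are illustrative. Note that with case_sensitive=False the canonical element from enum_values is returned, not the user's spelling:
from absl import flags

parser = flags.EnumParser(['apple', 'banana'], case_sensitive=False)
print(parser.parse('APPLE'))  # 'apple', the canonical element
parser.parse('cherry')        # raises ValueError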
tf.compat.v1.flags.Error The base class for all flags errors. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.Error
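Because all flags errors derive from this class, it is the natural catch-all when parsing arbitrary command lines. A minimal sketch, assuming absl.flags:
from absl import flags

try:
  flags.FLAGS(['program', '--no_such_flag=1'])
except flags.Error as e:  # catches UnrecognizedFlagError, IllegalFlagValueError, ...
  print('flag parsing failed:', e)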
tf.compat.v1.flags.Flag Information about a command-line flag. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.Flag tf.compat.v1.flags.Flag( parser, serializer, name, default, help_string, short_name=None, boolean=False, allow_override=False, allow_override_cpp=False, allow_hide_cpp=False, allow_overwrite=True, allow_using_method_names=False ) 'Flag' objects define the following fields:
.name - the name for this flag;
.default - the default value for this flag;
.default_unparsed - the unparsed default value for this flag;
.default_as_str - default value as repr'd string, e.g., "'true'" (or None);
.value - the most recent parsed value of this flag; set by parse();
.help - a help string or None if no help is available;
.short_name - the single letter alias for this flag (or None);
.boolean - if 'true', this flag does not accept arguments;
.present - true if this flag was parsed from command line flags;
.parser - an ArgumentParser object;
.serializer - an ArgumentSerializer object;
.allow_override - the flag may be redefined without raising an error, and the newly defined flag overrides the old one;
.allow_override_cpp - use the flag from C++ if available; the flag definition is replaced by the C++ flag after init;
.allow_hide_cpp - use the Python flag despite having a C++ flag with the same name (ignore the C++ flag);
.using_default_value - the flag value has not been set by user;
.allow_overwrite - the flag may be parsed more than once without raising an error, the last set value will be used;
.allow_using_method_names - whether this flag can be defined even if it has a name that conflicts with a FlagValues method.
The only public method of a 'Flag' object is parse(), but it is typically only called by a 'FlagValues' object. The parse() method is a thin wrapper around the 'ArgumentParser' parse() method. The parsed value is saved in .value, and the .present attribute is updated. If this flag was already present, an Error is raised. parse() is also called during init to parse the default value and initialize the .value attribute. This enables other Python modules to safely use flags even if the main module neglects to parse the command line arguments. The .present attribute is cleared after init parsing. If the default value is set to None, then the init parsing step is skipped and the .value attribute is initialized to None. Note: The default value is also presented to the user in the help string, so it is important that it be a legal value for this flag. Attributes value Methods flag_type flag_type() Returns a str that describes the type of the flag. Note: we use strings, and not the types.*Type constants because our flags can have more exotic types, e.g., 'comma separated list of strings', 'whitespace separated list of strings', etc. parse parse( argument ) Parses string and sets flag value. Args argument str or the correct flag value type, argument to be parsed. serialize serialize() Serializes the flag. unparse unparse() __eq__ __eq__( other ) Return self==value. __ge__ __ge__( other, NotImplemented=NotImplemented ) Return a >= b. Computed by @total_ordering from (not a < b). __gt__ __gt__( other, NotImplemented=NotImplemented ) Return a > b. Computed by @total_ordering from (not a < b) and (a != b). __le__ __le__( other, NotImplemented=NotImplemented ) Return a <= b. Computed by @total_ordering from (a < b) or (a == b). __lt__ __lt__( other ) Return self<value.
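Flag objects are rarely constructed by hand; they are created by the DEFINE_* helpers and can be retrieved from a FlagValues registry to inspect the fields above. A short sketch, assuming absl.flags; the flag name is illustrative:
from absl import flags

flags.DEFINE_string('greeting', 'hello', 'Greeting to print.')

flag_obj = flags.FLAGS['greeting']      # the underlying Flag object
print(flag_obj.name, flag_obj.default)  # greeting hello
print(flag_obj.value)                   # 'hello', the parsed default

flags.FLAGS(['program', '--greeting=hi'])
print(bool(flag_obj.present), flag_obj.value)  # True 'hi'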
tf.compat.v1.flags.FlagHolder Holds a defined flag. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.FlagHolder tf.compat.v1.flags.FlagHolder( flag_values, flag, ensure_non_none_value=False ) This facilitates a cleaner API around global state. Instead of flags.DEFINE_integer('foo', ...) flags.DEFINE_integer('bar', ...) ... def method(): # prints parsed value of 'foo' flag print(flags.FLAGS.foo) # runtime error due to typo or possibly bad coding style. print(flags.FLAGS.baz) it encourages code like FOO_FLAG = flags.DEFINE_integer('foo', ...) BAR_FLAG = flags.DEFINE_integer('bar', ...) ... def method(): print(FOO_FLAG.value) print(BAR_FLAG.value) since the name of the flag appears only once in the source code. Args flag_values The container the flag is registered to. flag The flag object for this flag. ensure_non_none_value bool, if True, the value of the flag is not allowed to be None. Attributes default Returns the default value of the flag. name value Returns the value of the flag. If _ensure_non_none_value is True, the returned value is guaranteed not to be None. Methods __bool__ __bool__() __eq__ __eq__( other ) Return self==value. __nonzero__ __nonzero__()
tf.compat.v1.flags.FlagNameConflictsWithMethodError Raised when a flag name conflicts with FlagValues methods. Inherits From: Error View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.FlagNameConflictsWithMethodError
tf.compat.v1.flags.FLAGS Registry of 'Flag' objects. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.FLAGS tf.compat.v1.flags.FLAGS( *args, **kwargs ) A 'FlagValues' can then scan command line arguments, passing flag arguments through to the 'Flag' objects that it owns. It also provides easy access to the flag values. Typically only one 'FlagValues' object is needed by an application: flags.FLAGS This class is heavily overloaded: 'Flag' objects are registered via __setitem__: FLAGS['longname'] = x # register a new flag The .value attribute of the registered 'Flag' objects can be accessed as attributes of this 'FlagValues' object, through __getattr__. Both the long and short name of the original 'Flag' objects can be used to access its value: FLAGS.longname # parsed flag value FLAGS.x # parsed flag value (short name) Command line arguments are scanned and passed to the registered 'Flag' objects through the __call__ method. Unparsed arguments, including argv[0] (e.g. the program name), are returned. argv = FLAGS(sys.argv) # scan command line arguments The original registered Flag objects can be retrieved through the use of the dictionary-like operator, __getitem__: x = FLAGS['longname'] # access the registered Flag object The str() operator of a 'FlagValues' object provides help for all of the registered 'Flag' objects.
tf.compat.v1.flags.FlagValues Registry of 'Flag' objects. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.FlagValues tf.compat.v1.flags.FlagValues() A 'FlagValues' can then scan command line arguments, passing flag arguments through to the 'Flag' objects that it owns. It also provides easy access to the flag values. Typically only one 'FlagValues' object is needed by an application: flags.FLAGS This class is heavily overloaded: 'Flag' objects are registered via __setitem__: FLAGS['longname'] = x # register a new flag The .value attribute of the registered 'Flag' objects can be accessed as attributes of this 'FlagValues' object, through __getattr__. Both the long and short name of the original 'Flag' objects can be used to access its value: FLAGS.longname # parsed flag value FLAGS.x # parsed flag value (short name) Command line arguments are scanned and passed to the registered 'Flag' objects through the __call__ method. Unparsed arguments, including argv[0] (e.g. the program name), are returned. argv = FLAGS(sys.argv) # scan command line arguments The original registered Flag objects can be retrieved through the use of the dictionary-like operator, __getitem__: x = FLAGS['longname'] # access the registered Flag object The str() operator of a 'FlagValues' object provides help for all of the registered 'Flag' objects. Methods append_flag_values append_flag_values( flag_values ) Appends flags registered in another FlagValues instance. Args flag_values FlagValues, the FlagValues instance from which to copy flags. append_flags_into_file append_flags_into_file( filename ) Appends all flags assignments from this FlagValues object to a file. Output will be in the format of a flagfile. Note: MUST mirror the behavior of the C++ AppendFlagsIntoFile from https://github.com/gflags/gflags Args filename str, name of the file. find_module_defining_flag find_module_defining_flag( flagname, default=None ) Return the name of the module defining this flag, or default. Args flagname str, name of the flag to lookup. default Value to return if flagname is not defined. Defaults to None. Returns The name of the module which registered the flag with this name. If no such module exists (i.e. no flag with this name exists), we return default. find_module_id_defining_flag find_module_id_defining_flag( flagname, default=None ) Return the ID of the module defining this flag, or default. Args flagname str, name of the flag to lookup. default Value to return if flagname is not defined. Defaults to None. Returns The ID of the module which registered the flag with this name. If no such module exists (i.e. no flag with this name exists), we return default. flag_values_dict flag_values_dict() Returns a dictionary that maps flag names to flag values. flags_by_module_dict flags_by_module_dict() Returns the dictionary of module_name -> list of defined flags. Returns A dictionary. Its keys are module names (strings). Its values are lists of Flag objects. flags_by_module_id_dict flags_by_module_id_dict() Returns the dictionary of module_id -> list of defined flags. Returns A dictionary. Its keys are module IDs (ints). Its values are lists of Flag objects. flags_into_string flags_into_string() Returns a string with the flags assignments from this FlagValues object. This function ignores flags whose value is None. Each flag assignment is separated by a newline.
Note: MUST mirror the behavior of the C++ CommandlineFlagsIntoString from https://github.com/gflags/gflags Returns str, the string with the flags assignments from this FlagValues object. The flags are ordered by (module_name, flag_name). get_flag_value get_flag_value( name, default ) Returns the value of a flag (if not None) or a default value. Args name str, the name of a flag. default Default value to use if the flag value is None. Returns Requested flag value or default. get_flags_for_module get_flags_for_module( module ) Returns the list of flags defined by a module. Args module module|str, the module to get flags from. Returns [Flag], a new list of Flag instances. Caller may update this list as desired; none of those changes will affect the internals of this FlagValues instance. get_help get_help( prefix='', include_special_flags=True ) Returns a help string for all known flags. Args prefix str, per-line output prefix. include_special_flags bool, whether to include description of SPECIAL_FLAGS, i.e. --flagfile and --undefok. Returns str, formatted help message. get_key_flags_for_module get_key_flags_for_module( module ) Returns the list of key flags for a module. Args module module|str, the module to get key flags from. Returns [Flag], a new list of Flag instances. Caller may update this list as desired; none of those changes will affect the internals of this FlagValues instance. is_gnu_getopt is_gnu_getopt() is_parsed is_parsed() Returns whether flags were parsed. key_flags_by_module_dict key_flags_by_module_dict() Returns the dictionary of module_name -> list of key flags. Returns A dictionary. Its keys are module names (strings). Its values are lists of Flag objects. main_module_help main_module_help() Describes the key flags of the main module. Returns str, describing the key flags of the main module. mark_as_parsed mark_as_parsed() Explicitly marks flags as parsed. Use this when the caller knows that this FlagValues has been parsed as if a __call__() invocation has happened. This is only a public method for use by things like appcommands which do additional command-like parsing. module_help module_help( module ) Describes the key flags of a module. Args module module|str, the module to describe the key flags for. Returns str, describing the key flags of a module. read_flags_from_files read_flags_from_files( argv, force_gnu=True ) Processes command line args, but also allows args to be read from file. Args argv [str], a list of strings, usually sys.argv[1:], which may contain one or more flagfile directives of the form --flagfile="./filename". Note that the name of the program (sys.argv[0]) should be omitted. force_gnu bool, if False, --flagfile parsing obeys the FLAGS.is_gnu_getopt() value. If True, ignore the value and always follow gnu_getopt semantics. Returns A new list which has the original list combined with what we read from any flagfile(s). Raises IllegalFlagValueError Raised when --flagfile is provided with no argument. This function is called by FLAGS(argv). It scans the input list for a flag that looks like: --flagfile=<filename>. Then it opens <filename>, reads all valid key and value pairs and inserts them into the input list in exactly the place where the --flagfile arg is found. Note that your application's flags are still defined the usual way using absl.flags DEFINE_flag() type functions. Notes (assuming we're getting a commandline of some sort as our input): --> For duplicate flags, the last one we hit should "win".
--> Since flags that appear later win, a flagfile's settings can be "weak" if the --flagfile comes at the beginning of the argument sequence, and it can be "strong" if the --flagfile comes at the end. --> A further "--flagfile=<filename>" CAN be nested in a flagfile. It will be expanded in exactly the spot where it is found. --> In a flagfile, a line beginning with # or // is a comment. --> Entirely blank lines should be ignored. register_flag_by_module register_flag_by_module( module_name, flag ) Records the module that defines a specific flag. We keep track of which flag is defined by which module so that we can later sort the flags by module. Args module_name str, the name of a Python module. flag Flag, the Flag instance that is key to the module. register_flag_by_module_id register_flag_by_module_id( module_id, flag ) Records the module that defines a specific flag. Args module_id int, the ID of the Python module. flag Flag, the Flag instance that is key to the module. register_key_flag_for_module register_key_flag_for_module( module_name, flag ) Specifies that a flag is a key flag for a module. Args module_name str, the name of a Python module. flag Flag, the Flag instance that is key to the module. remove_flag_values remove_flag_values( flag_values ) Remove flags that were previously appended from another FlagValues. Args flag_values FlagValues, the FlagValues instance containing flags to remove. set_default set_default( name, value ) Changes the default value of the named flag object. The flag's current value is also updated if the flag is currently using the default value, i.e. not specified in the command line, and not set by FLAGS.name = value. Args name str, the name of the flag to modify. value The new default value. Raises UnrecognizedFlagError Raised when there is no registered flag named name. IllegalFlagValueError Raised when value is not valid. set_gnu_getopt set_gnu_getopt( gnu_getopt=True ) Sets whether or not to use GNU style scanning. GNU style allows mixing of flag and non-flag arguments. See http://docs.python.org/library/getopt.html#getopt.gnu_getopt Args gnu_getopt bool, whether or not to use GNU style scanning. unparse_flags unparse_flags() Unparses all flags to the point before any FLAGS(argv) was called. validate_all_flags validate_all_flags() Verifies whether all flags pass validation. Raises AttributeError Raised if validators work with a non-existing flag. IllegalFlagValueError Raised if validation fails for at least one validator. write_help_in_xml_format write_help_in_xml_format( outfile=None ) Outputs flag documentation in XML format. Note: We use element names that are consistent with those used by the C++ command-line flag library, from https://github.com/gflags/gflags We also use a few new elements (e.g., <key>), but we do not interfere / overlap with existing XML elements used by the C++ library. Please maintain this consistency. Args outfile File object we write to. Default None means sys.stdout. __call__ __call__( argv, known_only=False ) Parses flags from argv; stores parsed flags into this FlagValues object. All unparsed arguments are returned. Args argv a tuple/list of strings. known_only bool, if True, parse and remove known flags; return the rest untouched. Unknown flags specified by --undefok are not returned. Returns The list of arguments not parsed as options, including argv[0]. Raises Error Raised on any parsing error. TypeError Raised on passing wrong type of arguments. ValueError Raised on flag value parsing error.
__contains__ __contains__( name ) Returns True if name is a value (flag) in the dict. __getitem__ __getitem__( name ) Returns the Flag object for the flag --name. __iter__ __iter__() __len__ __len__()
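A short sketch of typical FlagValues usage, assuming absl.flags; the flag name is illustrative:
from absl import flags

flags.DEFINE_integer('batch_size', 32, 'Batch size.')
FLAGS = flags.FLAGS

FLAGS(['program', '--batch_size=64'])          # __call__ parses the command line
print(FLAGS.batch_size)                        # 64
print(FLAGS.flag_values_dict()['batch_size'])  # 64
print('batch_size' in FLAGS)                   # True, via __contains__
FLAGS.set_default('batch_size', 16)            # parsed value is kept: still 64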
tf.compat.v1.flags.flag_dict_to_args Convert a dict of values into process call parameters. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.flag_dict_to_args tf.compat.v1.flags.flag_dict_to_args( flag_map, multi_flags=None ) This method is used to convert a dictionary into a sequence of parameters for a binary that parses arguments using this module. Args flag_map dict, a mapping where the keys are flag names (strings). Values are treated according to their type: If value is None, then only the name is emitted. If value is True, then only the name is emitted. If value is False, then only the name prepended with 'no' is emitted. If value is a string then --name=value is emitted. If value is a collection, this will emit --name=value1,value2,value3, unless the flag name is in multi_flags, in which case this will emit --name=value1 --name=value2 --name=value3. Everything else is converted to string and passed as such. multi_flags set, names (strings) of flags that should be treated as multi-flags. Yields A sequence of strings suitable for a subprocess execution.
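A usage sketch, assuming absl.flags; the flag names are illustrative and the expected output follows the per-type rules above:
from absl import flags

args = list(flags.flag_dict_to_args(
    {'verbose': True, 'dry_run': False, 'name': 'job1', 'dirs': ['a', 'b']},
    multi_flags={'dirs'}))
print(args)
# ['--verbose', '--nodry_run', '--name=job1', '--dirs=a', '--dirs=b']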
tf.compat.v1.flags.FloatParser Parser of floating point values. Inherits From: ArgumentParser View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.FloatParser tf.compat.v1.flags.FloatParser( lower_bound=None, upper_bound=None ) Parsed value may be bounded to a given upper and lower bound. Methods convert convert( argument ) Returns the float value of argument. flag_type flag_type() See base class. is_outside_bounds is_outside_bounds( val ) Returns whether the value is outside the bounds or not. parse parse( argument ) See base class. Class Variables number_article 'a' number_name 'number' syntactic_help 'a number'
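A brief sketch, assuming absl.flags:
from absl import flags

parser = flags.FloatParser(lower_bound=0.0, upper_bound=1.0)
print(parser.parse('0.25'))  # 0.25
parser.parse('1.5')          # raises ValueError: outside the given bounds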
tf.compat.v1.flags.get_help_width Returns the integer width of help lines that is used in TextWrap. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.get_help_width tf.compat.v1.flags.get_help_width()
tf.compat.v1.flags.IllegalFlagValueError Raised when the flag command line argument is illegal. Inherits From: Error View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.IllegalFlagValueError
tf.compat.v1.flags.IntegerParser Parser of an integer value. Inherits From: ArgumentParser View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.IntegerParser tf.compat.v1.flags.IntegerParser( lower_bound=None, upper_bound=None ) Parsed value may be bounded to a given upper and lower bound. Methods convert convert( argument ) Returns the int value of argument. flag_type flag_type() See base class. is_outside_bounds is_outside_bounds( val ) Returns whether the value is outside the bounds or not. parse parse( argument ) See base class. Class Variables number_article 'an' number_name 'integer' syntactic_help 'an integer'
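Usage mirrors FloatParser; a brief sketch, assuming absl.flags:
from absl import flags

parser = flags.IntegerParser(lower_bound=1)
print(parser.parse('8'))  # 8
parser.parse('0')         # raises ValueError: below the lower bound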
tf.compat.v1.flags.ListParser Parser for a comma-separated list of strings. Inherits From: BaseListParser, ArgumentParser View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.ListParser tf.compat.v1.flags.ListParser() Methods flag_type flag_type() See base class. parse parse( argument ) Parses argument as comma-separated list of strings. Class Variables syntactic_help ''
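A brief sketch, assuming absl.flags; the companion ListSerializer (documented next) performs the inverse operation:
from absl import flags

parser = flags.ListParser()
print(parser.parse('a, b,c'))  # ['a', 'b', 'c']; whitespace around items is stripped
print(flags.ListSerializer(',').serialize(['a', 'b', 'c']))  # 'a,b,c'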
tf.compat.v1.flags.ListSerializer Base class for generating string representations of a flag value. Inherits From: ArgumentSerializer View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.ListSerializer tf.compat.v1.flags.ListSerializer( list_sep ) Methods serialize serialize( value ) See base class.
tf.compat.v1.flags.mark_bool_flags_as_mutual_exclusive Ensures that only one flag among flag_names is True. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.mark_bool_flags_as_mutual_exclusive tf.compat.v1.flags.mark_bool_flags_as_mutual_exclusive( flag_names, required=False, flag_values=_flagvalues.FLAGS ) Args flag_names [str], names of the flags. required bool. If true, exactly one flag must be True. Otherwise, at most one flag can be True, and it is valid for all flags to be False. flag_values flags.FlagValues, optional FlagValues instance where the flags are defined.
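A usage sketch, assuming absl.flags; the flag names are illustrative:
from absl import flags

flags.DEFINE_boolean('use_gpu', False, 'Run on GPU.')
flags.DEFINE_boolean('use_tpu', False, 'Run on TPU.')
flags.mark_bool_flags_as_mutual_exclusive(['use_gpu', 'use_tpu'])

flags.FLAGS(['program', '--use_gpu'])  # OK
# flags.FLAGS(['program', '--use_gpu', '--use_tpu'])  # would fail validation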
tf.compat.v1.flags.mark_flags_as_mutual_exclusive Ensures that only one flag among flag_names is not None. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.mark_flags_as_mutual_exclusive tf.compat.v1.flags.mark_flags_as_mutual_exclusive( flag_names, required=False, flag_values=_flagvalues.FLAGS ) Important note: This validator checks if flag values are None, and it does not distinguish between default and explicit values. Therefore, this validator does not make sense when applied to flags with default values other than None, including other false values (e.g. False, 0, '', []). That includes multi flags with a default value of [] instead of None. Args flag_names [str], names of the flags. required bool. If true, exactly one of the flags must have a value other than None. Otherwise, at most one of the flags can have a value other than None, and it is valid for all of the flags to be None. flag_values flags.FlagValues, optional FlagValues instance where the flags are defined.
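A usage sketch, assuming absl.flags; the flags default to None so the validator can distinguish set from unset, per the note above:
from absl import flags

flags.DEFINE_string('checkpoint', None, 'Restore from a checkpoint.')
flags.DEFINE_string('saved_model', None, 'Restore from a SavedModel.')
flags.mark_flags_as_mutual_exclusive(['checkpoint', 'saved_model'])

# Passing both --checkpoint and --saved_model now fails at parse time.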
tf.compat.v1.flags.mark_flags_as_required Ensures that flags are not None during program execution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.mark_flags_as_required tf.compat.v1.flags.mark_flags_as_required( flag_names, flag_values=_flagvalues.FLAGS ) If your module might be imported by others, and you only wish to make the flag required when the module is directly executed, call this method like this: if __name__ == '__main__': flags.mark_flags_as_required(['flag1', 'flag2', 'flag3']) app.run() Args flag_names Sequence[str], names of the flags. flag_values flags.FlagValues, optional FlagValues instance where the flags are defined. Raises AttributeError Raised if any of the flag names has not already been defined as a flag.
tf.compat.v1.flags.mark_flag_as_required Ensures that flag is not None during program execution. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.mark_flag_as_required tf.compat.v1.flags.mark_flag_as_required( flag_name, flag_values=_flagvalues.FLAGS ) Registers a flag validator, which will follow usual validator rules. Important note: validator will pass for any non-None value, such as False, 0 (zero), '' (empty string) and so on. If your module might be imported by others, and you only wish to make the flag required when the module is directly executed, call this method like this: if __name__ == '__main__': flags.mark_flag_as_required('your_flag_name') app.run() Args flag_name str, name of the flag. flag_values flags.FlagValues, optional FlagValues instance where the flag is defined. Raises AttributeError Raised when flag_name is not registered as a valid flag name.
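A usage sketch, assuming absl.flags and absl.app; the flag name is illustrative:
from absl import app
from absl import flags

flags.DEFINE_string('output_dir', None, 'Where to write results.')
flags.mark_flag_as_required('output_dir')

def main(argv):
  del argv  # unused
  print(flags.FLAGS.output_dir)

if __name__ == '__main__':
  app.run(main)  # exits with an error if --output_dir is not supplied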
tf.compat.v1.flags.MultiEnumClassFlag A multi_enum_class flag. Inherits From: MultiFlag, Flag View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.MultiEnumClassFlag tf.compat.v1.flags.MultiEnumClassFlag( name, default, help_string, enum_class, case_sensitive=False, **args ) See the doc for MultiFlag for most behaviors of this class. In addition, this class knows how to handle enum.Enum instances as values for this flag type. Attributes value Methods flag_type flag_type() See base class. parse parse( arguments ) Parses one or more arguments with the installed parser. Args arguments a single argument or a list of arguments (typically a list of default values); a single argument is converted internally into a list containing one item. serialize serialize() Serializes the flag. unparse unparse() __eq__ __eq__( other ) Return self==value. __ge__ __ge__( other, NotImplemented=NotImplemented ) Return a >= b. Computed by @total_ordering from (not a < b). __gt__ __gt__( other, NotImplemented=NotImplemented ) Return a > b. Computed by @total_ordering from (not a < b) and (a != b). __le__ __le__( other, NotImplemented=NotImplemented ) Return a <= b. Computed by @total_ordering from (a < b) or (a == b). __lt__ __lt__( other ) Return self<value.
tf.compat.v1.flags.MultiFlag A flag that can appear multiple times on the command-line. Inherits From: Flag View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.MultiFlag tf.compat.v1.flags.MultiFlag( *args, **kwargs ) The value of such a flag is a list that contains the individual values from all the appearances of that flag on the command-line. See the doc for Flag for most behavior of this class. Only differences in behavior are described here: The default value may be either a single value or an iterable of values. A single value is transformed into a single-item list of that value. The value of the flag is always a list, even if the option was only supplied once, and even if the default value is a single value. Attributes value Methods flag_type flag_type() See base class. parse parse( arguments ) Parses one or more arguments with the installed parser. Args arguments a single argument or a list of arguments (typically a list of default values); a single argument is converted internally into a list containing one item. serialize serialize() Serializes the flag. unparse unparse() __eq__ __eq__( other ) Return self==value. __ge__ __ge__( other, NotImplemented=NotImplemented ) Return a >= b. Computed by @total_ordering from (not a < b). __gt__ __gt__( other, NotImplemented=NotImplemented ) Return a > b. Computed by @total_ordering from (not a < b) and (a != b). __le__ __le__( other, NotImplemented=NotImplemented ) Return a <= b. Computed by @total_ordering from (a < b) or (a == b). __lt__ __lt__( other ) Return self<value.
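MultiFlags are usually created through the DEFINE_multi_* helpers. A short sketch, assuming absl.flags; the flag name is illustrative:
from absl import flags

flags.DEFINE_multi_string('define', [], 'Key=value pair; may be repeated.')
FLAGS = flags.FLAGS

FLAGS(['program', '--define=a=1', '--define=b=2'])
print(FLAGS.define)  # ['a=1', 'b=2']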
tf.compat.v1.flags.multi_flags_validator A function decorator for defining a multi-flag validator. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.multi_flags_validator tf.compat.v1.flags.multi_flags_validator( flag_names, message='Flag validation failed', flag_values=_flagvalues.FLAGS ) Registers the decorated function as a validator for flag_names, e.g. @flags.multi_flags_validator(['foo', 'bar']) def _CheckFooBar(flags_dict): ... See register_multi_flags_validator() for the specification of checker function. Args flag_names [str], a list of the flag names to be checked. message str, error text to be shown to the user if checker returns False. If checker raises flags.ValidationError, message from the raised error will be shown. flag_values flags.FlagValues, optional FlagValues instance to validate against. Returns A function decorator that registers its function argument as a validator. Raises AttributeError Raised when a flag is not registered as a valid flag name.
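A usage sketch, assuming absl.flags; the flag names and the constraint are illustrative:
from absl import flags

flags.DEFINE_integer('low', 0, 'Lower bound.')
flags.DEFINE_integer('high', 10, 'Upper bound.')

@flags.multi_flags_validator(['low', 'high'],
                             message='--low must not exceed --high')
def _check_bounds(flags_dict):
  return flags_dict['low'] <= flags_dict['high']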
tf.compat.v1.flags.register_multi_flags_validator Adds a constraint to multiple flags. View aliases Compat aliases for migration See Migration guide for more details. tf.compat.v1.app.flags.register_multi_flags_validator tf.compat.v1.flags.register_multi_flags_validator( flag_names, multi_flags_checker, message='Flags validation failed', flag_values=_flagvalues.FLAGS ) The constraint is validated when flags are initially parsed, and after each change of the corresponding flag's value. Args flag_names [str], a list of the flag names to be checked. multi_flags_checker callable, a function to validate the flag. input - dict, with keys() being flag_names, and value for each key being the value of the corresponding flag (string, boolean, etc). output - bool, True if validator constraint is satisfied. If constraint is not satisfied, it should either return False or raise flags.ValidationError. message str, error text to be shown to the user if checker returns False. If checker raises flags.ValidationError, message from the raised error will be shown. flag_values flags.FlagValues, optional FlagValues instance to validate against. Raises AttributeError Raised when a flag is not registered as a valid flag name.