( model: PreTrainedModel, quantizer: Optional[IncQuantizer] = None, pruner: Optional[IncPruner] = None, distiller: Optional[IncDistiller] = None, one_shot_optimization: bool = True, eval_func: Optional[Callable] = None, train_func: Optional[Callable] = None )
( save_directory: Optional[Union[str, os.PathLike]] = None )
Save the optimized model as well as its corresponding configuration to a directory, so that it can be reloaded.
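A minimal usage sketch follows. The class and import names (IncOptimizer, IncQuantizer) are inferred from the parameter types above, the fit() call is an assumed optimization step, and the checkpoint and file names are placeholders:

```python
from transformers import AutoModelForSequenceClassification
from optimum.intel.neural_compressor import IncOptimizer, IncQuantizer

def eval_func(model):
    # Placeholder: a real evaluation function returns the metric to tune against.
    return 0.0

# Placeholder checkpoint; any PreTrainedModel works here.
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# Build a quantizer from a configuration file (the path is illustrative).
quantizer = IncQuantizer("quantization.yml", eval_func=eval_func)

# one_shot_optimization=True applies the supplied compression objects jointly.
optimizer = IncOptimizer(model, quantizer=quantizer, one_shot_optimization=True)
optimized_model = optimizer.fit()  # assumption: fit() runs the optimization loop

# Save the optimized model and its configuration so they can be reloaded.
optimizer.save_pretrained("./inc_model")
```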
( config: Union[str, IncPruningConfig], eval_func: Optional[Callable], train_func: Optional[Callable] )
( config: Union[str, IncDistillationConfig], teacher_model: PreTrainedModel, eval_func: Optional[Callable], train_func: Optional[Callable] )
( config: Union[str, IncQuantizationConfig], eval_func: Optional[Callable], train_func: Optional[Callable] = None, calib_dataloader: Optional[DataLoader] = None, calib_func: Optional[Callable] = None )
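The pruning and distillation signatures above follow the same pattern as the quantizer shown in the earlier sketch. A brief sketch, assuming the classes are exported as IncPruner and IncDistiller and that the YAML file names and checkpoint are placeholders:

```python
from transformers import AutoModelForSequenceClassification
from optimum.intel.neural_compressor import IncDistiller, IncPruner

def eval_func(model):
    # Placeholder: return the metric the tuning loop should track.
    return 0.0

def train_func(model):
    # Placeholder: run one fine-tuning pass over `model`.
    pass

# Teacher used for distillation (placeholder checkpoint).
teacher = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# `config` accepts either a config object or a string/path resolvable to one;
# the YAML file names below are illustrative.
pruner = IncPruner("pruning.yml", eval_func=eval_func, train_func=train_func)
distiller = IncDistiller(
    "distillation.yml",
    teacher_model=teacher,
    eval_func=eval_func,
    train_func=train_func,
)
```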
( model: Union[PreTrainedModel, torch.nn.Module] = None, args: TrainingArguments = None, data_collator: Optional[DataCollator] = None, train_dataset: Optional[Dataset] = None, eval_dataset: Optional[Dataset] = None, tokenizer: Optional[PreTrainedTokenizerBase] = None, model_init: Callable[[], PreTrainedModel] = None, compute_metrics: Optional[Callable[[EvalPrediction], Dict]] = None, callbacks: Optional[List[TrainerCallback]] = None, optimizers: Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None), preprocess_logits_for_metrics: Callable[[torch.Tensor, torch.Tensor], torch.Tensor] = None )
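This constructor mirrors transformers.Trainer. A construction sketch, assuming the class is exported as IncTrainer and using placeholder datasets:

```python
from transformers import AutoModelForSequenceClassification, TrainingArguments
from optimum.intel.neural_compressor import IncTrainer

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
args = TrainingArguments(output_dir="./out")

# Replace with real torch.utils.data.Dataset objects.
train_dataset = eval_dataset = None

trainer = IncTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
```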
How the distillation loss is computed given the student and teacher outputs.
How the loss is computed. By default, all models return the loss in the first element.
Will save the model, so you can reload it using from_pretrained(). Will only save from the main process.
( agent: Optional[Component] = None, resume_from_checkpoint: Optional[Union[str, bool]] = None, trial: Union["optuna.Trial", Dict[str, Any]] = None, ignore_keys_for_eval: Optional[List[str]] = None, **kwargs )
Parameters
agent (Component, optional) — Component object containing the compression objects to apply during the training process.
Main training entry point.
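Continuing the IncTrainer sketch above, a hedged example of the training entry point and of save_model(); passing agent=None (i.e. no compression objects) is an assumption about the default behavior:

```python
# `agent` may carry the neural_compressor compression objects;
# None here means plain fine-tuning.
train_result = trainer.train(agent=None, resume_from_checkpoint=None)

# Save the trained model so it can be reloaded with from_pretrained();
# in distributed runs only the main process writes to disk.
trainer.save_model("./out")
```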
( model_name_or_path: str, inc_config: Union[IncOptimizedConfig, str] = None, q_model_name: Optional[str] = None, **kwargs ) → q_model
Parameters
inc_config (Union[IncOptimizedConfig, str], optional) — Can be either an instance of IncOptimizedConfig or a string valid as input to IncOptimizedConfig.from_pretrained().
revision (str, optional) — The specific model version to use; revision can be any identifier allowed by git.
Returns
q_model — Quantized model.
Instantiate a quantized PyTorch model from a given Intel Neural Compressor configuration file.
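A loading sketch; the task-specific class name and the Hub checkpoint below are illustrative assumptions:

```python
from optimum.intel.neural_compressor import IncQuantizedModelForSequenceClassification

# Loads the quantized weights together with the Intel Neural Compressor
# configuration stored in the repository (placeholder checkpoint name).
q_model = IncQuantizedModelForSequenceClassification.from_pretrained(
    "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static"
)
```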