Optimum documentation

Gaudi Configuration

To make the most of Gaudi, it is advised to rely on advanced features such as Habana Mixed Precision or optimized operators. You can specify which features to use in a Gaudi configuration, which takes the form of a JSON file following this template:

{
  "use_habana_mixed_precision": true/false,
  "hmp_is_verbose": true/false,
  "use_fused_adam": true/false,
  "use_fused_clip_norm": true/false,
  "hmp_bf16_ops": [
    "torch operator to compute in bf16",
    "..."
  ],
  "hmp_fp32_ops": [
    "torch operator to compute in fp32",
    "..."
  ]
}

Here is a description of each configuration parameter:

  • use_habana_mixed_precision specifies whether Habana Mixed Precision (HMP) should be used. HMP allows you to mix fp32 and bf16 operations. You can find more information here.
  • hmp_is_verbose specifies whether to log precision decisions for each operation, for debugging purposes. It is disabled by default. You can find an example of such a log here.
  • use_fused_adam specifies whether to use the custom fused implementation of the ADAM optimizer provided by Habana.
  • use_fused_clip_norm specifies whether to use the custom fused implementation of gradient norm clipping provided by Habana.
  • hmp_bf16_ops specifies the Torch operations that should be computed in bf16. You can find more information about casting rules here.
  • hmp_fp32_ops specifies the Torch operations that should be computed in fp32. You can find more information about casting rules here.

hmp_is_verbose, hmp_bf16_ops and hmp_fp32_ops are ignored if use_habana_mixed_precision is false.
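Since the configuration is plain JSON, you can also generate it programmatically. A minimal sketch using only the standard library; the file name and the operator lists below are illustrative, not prescribed by Optimum:

```python
import json

# Illustrative Gaudi configuration: enable HMP and Habana's fused implementations,
# with example bf16/fp32 operator lists (adapt these to your model).
gaudi_config = {
    "use_habana_mixed_precision": True,
    "hmp_is_verbose": False,
    "use_fused_adam": True,
    "use_fused_clip_norm": True,
    "hmp_bf16_ops": ["add", "matmul", "softmax"],
    "hmp_fp32_ops": ["embedding", "nll_loss", "log_softmax"],
}

# Write the configuration so it can be loaded later,
# e.g. by pointing GaudiConfig.from_pretrained at its directory.
with open("gaudi_config.json", "w") as f:
    json.dump(gaudi_config, f, indent=2)
```
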

You can find examples of Gaudi configurations in the Habana model repository on the Hugging Face Hub. For instance, for BERT Large we have:

{
  "use_habana_mixed_precision": true,
  "hmp_is_verbose": false,
  "use_fused_adam": true,
  "use_fused_clip_norm": true,
  "hmp_bf16_ops": [
    "add",
    "addmm",
    "bmm",
    "div",
    "dropout",
    "gelu",
    "iadd",
    "linear",
    "layer_norm",
    "matmul",
    "mm",
    "rsub",
    "softmax",
    "truediv"
  ],
  "hmp_fp32_ops": [
    "embedding",
    "nll_loss",
    "log_softmax"
  ]
}
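Since each operator should be assigned a single precision, a quick sanity check is to verify that no operator appears in both lists. A minimal sketch over an excerpt of the configuration above (the disjointness check is our own assumption, not an Optimum API):

```python
import json

# Excerpt of the BERT Large Gaudi configuration shown above.
config_json = """
{
  "hmp_bf16_ops": ["add", "addmm", "bmm", "div", "dropout", "gelu", "iadd",
                   "linear", "layer_norm", "matmul", "mm", "rsub", "softmax", "truediv"],
  "hmp_fp32_ops": ["embedding", "nll_loss", "log_softmax"]
}
"""

cfg = json.loads(config_json)

# Each operator should have an unambiguous precision, so the two lists
# are expected to be disjoint.
overlap = set(cfg["hmp_bf16_ops"]) & set(cfg["hmp_fp32_ops"])
print(sorted(overlap))  # prints []
```
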

To instantiate a Gaudi configuration yourself in your script, you can do the following:

from optimum.habana import GaudiConfig

# gaudi_config_name is the name of a Gaudi configuration on the Hugging Face Hub
# (e.g. "Habana/bert-large-uncased-whole-word-masking") or a path to a local JSON file
gaudi_config = GaudiConfig.from_pretrained(
    gaudi_config_name,
    cache_dir=model_args.cache_dir,
    revision=model_args.model_revision,
    use_auth_token=True if model_args.use_auth_token else None,
)

and pass it to the trainer with the gaudi_config argument.

GaudiConfig

class optimum.habana.GaudiConfig( **kwargs )