Gaudi Configuration

To make the most of Gaudi, it is recommended to rely on advanced features such as Habana Mixed Precision (HMP) or Habana's optimized operators. You can specify which features to use in a Gaudi configuration, which takes the form of a JSON file following this template:

{
  "use_habana_mixed_precision": true/false,
  "hmp_opt_level": "O1"/"O2",
  "hmp_is_verbose": true/false,
  "use_fused_adam": true/false,
  "use_fused_clip_norm": true/false,
  "hmp_bf16_ops": [
    "torch operator to compute in bf16",
    "..."
  ],
  "hmp_fp32_ops": [
    "torch operator to compute in fp32",
    "..."
  ]
}

Here is a description of each configuration parameter:

- use_habana_mixed_precision: whether to use Habana Mixed Precision (HMP) to run the model,
- hmp_opt_level: the optimization level of HMP, which can be "O1" or "O2",
- hmp_is_verbose: whether HMP should be verbose or not,
- use_fused_adam: whether to use the custom fused implementation of the ADAM optimizer provided by Habana,
- use_fused_clip_norm: whether to use the custom fused implementation of gradient norm clipping provided by Habana,
- hmp_bf16_ops: the list of Torch operators to compute in bf16 precision,
- hmp_fp32_ops: the list of Torch operators to compute in fp32 precision.

Note that hmp_opt_level, hmp_is_verbose, hmp_bf16_ops and hmp_fp32_ops will not be used if use_habana_mixed_precision is false.
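As an illustration, here is a minimal Python sketch of how the template above maps to GaudiConfig keyword arguments and can be serialized back to a JSON file. The operator lists are truncated and the output directory name is illustrative:

```python
from optimum.habana import GaudiConfig

# Each key of the JSON template corresponds to a GaudiConfig keyword argument.
gaudi_config = GaudiConfig(
    use_habana_mixed_precision=True,
    hmp_opt_level="O1",
    hmp_is_verbose=False,
    use_fused_adam=True,
    use_fused_clip_norm=True,
    hmp_bf16_ops=["add", "addmm"],           # Torch operators to compute in bf16
    hmp_fp32_ops=["embedding", "nll_loss"],  # Torch operators to compute in fp32
)

# Write the configuration to disk ("my_gaudi_config" is an illustrative path);
# this should produce a JSON file following the template above.
gaudi_config.save_pretrained("my_gaudi_config")
```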

You can find examples of Gaudi configurations in the model repositories of the Habana organization on the Hugging Face Hub. For instance, for BERT Large we have:

{
  "use_habana_mixed_precision": true,
  "hmp_opt_level": "O1",
  "hmp_is_verbose": false,
  "use_fused_adam": true,
  "use_fused_clip_norm": true,
  "hmp_bf16_ops": [
    "add",
    "addmm",
    "bmm",
    "div",
    "dropout",
    "gelu",
    "iadd",
    "linear",
    "layer_norm",
    "matmul",
    "mm",
    "rsub",
    "softmax",
    "truediv"
  ],
  "hmp_fp32_ops": [
    "embedding",
    "nll_loss",
    "log_softmax"
  ]
}
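To use such a configuration for training, it can be downloaded from the Hub with from_pretrained and passed to the Gaudi trainer. Below is a sketch assuming the BERT Large repository shown above; dataset preparation is omitted:

```python
from optimum.habana import GaudiConfig, GaudiTrainer, GaudiTrainingArguments
from transformers import AutoModelForSequenceClassification

# Download the Gaudi configuration shown above from the Hugging Face Hub.
gaudi_config = GaudiConfig.from_pretrained("Habana/bert-large-uncased-whole-word-masking")

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-large-uncased-whole-word-masking"
)

training_args = GaudiTrainingArguments(
    output_dir="./output",
    use_habana=True,     # run training on HPUs
    use_lazy_mode=True,  # use lazy execution mode
)

# The trainer applies the features enabled in the Gaudi configuration
# (HMP, fused ADAM, fused gradient norm clipping).
trainer = GaudiTrainer(
    model=model,
    gaudi_config=gaudi_config,
    args=training_args,
    # train_dataset=..., eval_dataset=...  (omitted here)
)
```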

GaudiConfig

class optimum.habana.GaudiConfig

( **kwargs )
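Since the constructor only takes keyword arguments, here is a quick sketch of direct instantiation; the attribute names are those of the JSON template above:

```python
from optimum.habana import GaudiConfig

# Unspecified attributes keep their default values.
config = GaudiConfig(use_fused_adam=True, use_fused_clip_norm=True)
print(config.use_fused_adam)       # True
print(config.use_fused_clip_norm)  # True
```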