Remove hmp from gaudi_config.json and README

#2
No description provided.

@jwieczorekhabana We changed the field name from `disable_autocast` to `use_torch_autocast` in your PR.
So in the README we could have:

`use_torch_autocast`: whether to use Torch Autocast for managing mixed precision

Also, I think we should set `use_torch_autocast` to `True` in the Gaudi config because some users may have older scripts that do not pass the `bf16` argument.
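
For illustration, a minimal `gaudi_config.json` with Autocast enabled could look like the sketch below; the fused-optimizer flags are only examples of other fields such a config may carry, not part of this change:

```json
{
  "use_fused_adam": true,
  "use_fused_clip_norm": true,
  "use_torch_autocast": true
}
```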


@jwieczorekhabana I'm getting an error for Wav2Vec2 with Torch Autocast:

`RuntimeError: synNodeCreateWithId failed for node: batch_gemm with synStatus 26 [Generic failure].`

I guess we'll need to define custom bf16 ops. Let's wait a few days so that I can finish and merge this PR (which also contains a fix for GPT2 so that it works with Autocast): https://github.com/huggingface/optimum-habana/pull/308
I'll give you an update once that is done.
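
As a rough sketch of where this could go, the custom op lists might live in the Gaudi config next to `use_torch_autocast`; the `autocast_bf16_ops` / `autocast_fp32_ops` field names and the op lists below are assumptions for illustration only, not something settled in this thread:

```json
{
  "use_torch_autocast": true,
  "autocast_bf16_ops": ["add", "bmm", "mm", "softmax"],
  "autocast_fp32_ops": ["log_softmax", "nll_loss"]
}
```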

regisss changed pull request status to merged
