
Errors During Training with the Original Implementation, and How I Fixed Them

#7
by v2ray - opened

https://huggingface.co/v2ray/dbrx-base-fixed
The original DBRX implementation code has a few bugs that only affect training; I fixed them in my re-upload.
I re-uploaded because the fixes require the weight files to be converted, so if anyone wants to use the fix, you need to re-download the entire weights!

The issues and how I fixed them:

  1. Error when using gradient checkpointing - Fixed by passing the layer's arguments positionally, because `_gradient_checkpointing_func` doesn't support kwargs (a sketch follows this list).
  2. VRAM usage spikes and CUDA runs out of memory when backpropagating through the MLP layer - Fixed by splitting the experts' weights into separate tensors instead of using a single tensor for all the experts. I don't know for sure why this fixed it, but maybe torch was trying to compute a gradient for every expert at once, which shouldn't happen since it's a MoE model (see the second sketch after this list; a checkpoint-conversion sketch follows it).
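
For context on fix 1, here's a minimal runnable sketch of the kwargs problem, using `torch.utils.checkpoint.checkpoint` directly (in Transformers, `_gradient_checkpointing_func` typically wraps it). The function and tensor names here are illustrative stand-ins, not the actual DBRX modeling code:

```python
import torch
from torch.utils.checkpoint import checkpoint

def layer_forward(hidden_states, attention_mask, position_ids):
    # Stand-in for a decoder layer's forward(); the math is arbitrary.
    return hidden_states * attention_mask + position_ids

hidden_states = torch.randn(2, 4, requires_grad=True)
attention_mask = torch.ones(2, 4)
position_ids = torch.arange(4, dtype=torch.float32).expand(2, 4)

# Broken: the reentrant checkpoint implementation raises on keyword
# arguments for the wrapped function, which is what the original code hit.
# out = checkpoint(layer_forward, hidden_states,
#                  attention_mask=attention_mask, position_ids=position_ids,
#                  use_reentrant=True)

# Fixed: pass the layer's inputs positionally, in the order declared by
# the wrapped function's signature.
out = checkpoint(layer_forward, hidden_states, attention_mask, position_ids,
                 use_reentrant=True)
out.sum().backward()
```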
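And a minimal sketch of the weight-splitting idea behind fix 2. The module layout, names, and sizes are illustrative assumptions rather than the real DBRX code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitExpertMLP(nn.Module):
    """Sketch of per-expert weight storage; names/shapes are illustrative."""

    def __init__(self, num_experts: int, hidden_size: int, ffn_hidden_size: int):
        super().__init__()
        # Before the fix (fused): a single nn.Parameter of shape
        # (num_experts * ffn_hidden_size, hidden_size). Backprop through a
        # slice of it makes autograd materialize a gradient buffer for the
        # ENTIRE tensor, which plausibly explains the VRAM blow-up.
        # After the fix (split): one parameter per expert, so gradients are
        # only allocated for the experts a token was actually routed to.
        self.w1 = nn.ParameterList(
            nn.Parameter(torch.empty(ffn_hidden_size, hidden_size))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor, expert_idx: int) -> torch.Tensor:
        # The router (not shown here) selects expert_idx per token.
        return F.linear(x, self.w1[expert_idx])
```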
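Splitting changes the parameter names and shapes, which is why the checkpoint files had to be converted and re-downloaded. A hypothetical conversion along these lines; the key name, shard name, and expert count are made-up placeholders, not the real DBRX checkpoint layout:

```python
from safetensors.torch import load_file, save_file

NUM_EXPERTS = 16  # placeholder; check the model config for the real value

state = load_file("model.safetensors")  # placeholder shard name
# Pop the fused expert tensor and re-save it as one tensor per expert.
fused = state.pop("ffn.experts.mlp.w1")  # hypothetical key name
for i, chunk in enumerate(fused.chunk(NUM_EXPERTS, dim=0)):
    state[f"ffn.experts.mlp.w1.{i}"] = chunk.contiguous()
save_file(state, "model-converted.safetensors")
```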
Owner • edited Mar 29

Hey, thanks for this.
I won't fix this on my side since you've already done it, and will try to keep the repo 1:1 with the original.
Nice work though!

Undi95 pinned discussion
