Train Time

#1
by UphamProjects - opened

I'm trying to train the model. I can get the script to work, but the training time is quite long.

I'm probably doing something wrong, so any enlightenment would be appreciated. I'm not planning to use that exact dataset; I have another one I'd like to train the model on, but first I want to see how fast training can be.

Owner

Hi, you're right, but I can't solve it at the moment. Some people, like you, are running into issues with training time. I don't know why it's so slow. I'll fix this; maybe I'll change the implementation.

How fast is it in your experience, @Q-bert?

I'm trying to see the difference between your model and the architecture used here: https://github.com/state-spaces/mamba/tree/main. I can train both, but your script takes much longer to train than going through the state-spaces implementation. I also can't load your model into the state-spaces architecture the way I can with clibrain's and haven-hq's implementations. Your 370m is a better starting point for fine-tuning than https://huggingface.co/state-spaces/mamba-370m, but the difference in training speed is not insignificant, and I can't figure out what you're doing differently.

Here's how I ran a training pass on the 370m from state-spaces using your train method: https://colab.research.google.com/drive/1ShX0bE0OuDyBOR_7YzuhasSe02zzbUy-?usp=sharing
It's the difference between 8 hours on the Databricks train set for the notebook above and 8 days for the one below (I have either imdb or databricks in there at the moment, but either way it was a matter of days).
This is how I trained using yours: https://colab.research.google.com/drive/199DTxoqJFRwrsykIbZpuIxVd40RCP-LJ?usp=sharing
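
For context, the state-spaces side of that comparison boils down to a loop like the one below. This is just a minimal sketch, assuming the mamba_ssm package is installed with its CUDA kernels; the learning rate and dtype are placeholders, not what's in my notebook.

```python
import torch
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

# Load the official checkpoint with the reference implementation.
model = MambaLMHeadModel.from_pretrained(
    "state-spaces/mamba-370m", device="cuda", dtype=torch.bfloat16
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def train_step(input_ids):
    # MambaLMHeadModel's forward returns a namedtuple with a .logits field.
    logits = model(input_ids).logits
    # Plain next-token prediction: shift logits against the input as labels.
    loss = torch.nn.functional.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        input_ids[:, 1:].reshape(-1),
    )
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```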

Inference is faster for the 2.8b here (https://huggingface.co/clibrain/mamba-2.8b-instruct-openhermes) when it's run through the state-spaces architecture directly, but I can't load your 370m that way. The 370m you put out performs well, but it hangs on sequences of, say, 2,000 tokens, whereas the 2.8b can tackle full email chains (which tend to run between 1.5k and 3k tokens) at about 500-700 an hour on a T4 GPU on Colab.
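
By "through the state-spaces architecture directly" I mean something like the sketch below. It assumes mamba_ssm is installed, and the prompt is just a placeholder (state-spaces checkpoints use the gpt-neox-20b tokenizer):

```python
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = MambaLMHeadModel.from_pretrained(
    "clibrain/mamba-2.8b-instruct-openhermes", device="cuda", dtype=torch.float16
)

prompt = "Summarize this email chain:"  # placeholder prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")

# cg=True enables the CUDA-graph-cached decoding path in the official code.
out = model.generate(input_ids=input_ids, max_length=200, cg=True)
print(tokenizer.decode(out[0]))
```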

Hi, I'm facing the exact same issue. If you look at the Mamba paper, they talk about the different kinds of memory on a GPU (SRAM vs. HBM), and in their code they made some serious memory optimizations around that. This is why you need CUDA and NVCC to run their model.
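
A quick way to check whether you're actually getting those fused kernels rather than a slow fallback is below; it's a minimal sketch, and the import checks mirror what the mamba-ssm package itself does:

```python
# If these imports succeed, the hardware-aware fused kernels are available.
# Without them you either fall back to much slower pure-PyTorch reference
# code, or the package fails to build at all (it needs CUDA + nvcc).
try:
    import selective_scan_cuda  # compiled extension built by mamba-ssm
    print("fused selective scan: available")
except ImportError:
    print("fused selective scan: missing -- expect very slow training")

try:
    from causal_conv1d import causal_conv1d_fn  # optional companion kernel
    print("fused causal conv1d: available")
except ImportError:
    print("fused causal conv1d: missing -- conv1d falls back to PyTorch")
```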
