Running models on Google Colab / MacBook Pro M1

#11
by ajayyadav2055 - opened

Hello Creators, I tried the demo in the Defog web version with my custom table and it showed some great results. However, when I run it in Google Colab on the TPU runtime, it crashes even with 12GB of system RAM.
I wanted to understand how to use these 7B or even 34B models on low-end configurations. Is there any way to run them, or do we have to use high-end configurations?

@rishdotblog if you have any better approach to the above, please let the community know. Thanks

Defog.ai org

Hi @ajayyadav2055 , thanks for giving our model a run. Unfortunately we've not tried hosting the model on TPUs, although our prior experience with GPUs suggests you would need at least 14GB of RAM to host the 7B model in half precision (2 bytes per parameter), so 12GB would be insufficient. You can try TPU-architecture-specific post-training quantization methods (https://cloud.google.com/blog/products/compute/accurate-quantized-training-aqt-for-tpu-v5e), but you might see the quality of the results degrade slightly.
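
For a rough sense of what the half-precision math looks like in code, here is a minimal loading sketch with transformers on a GPU runtime. The checkpoint name is an assumption (substitute whichever SQLCoder checkpoint you use), and the commented-out 4-bit path uses bitsandbytes on CUDA rather than the TPU AQT approach linked above:

```python
# Minimal sketch: half-precision loading with transformers on a GPU runtime.
# "defog/sqlcoder-7b-2" is an assumed checkpoint name -- substitute your own.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "defog/sqlcoder-7b-2"  # assumption: whichever SQLCoder checkpoint you use

# 7B parameters * 2 bytes (fp16) ~= 14 GB for the weights alone, before
# activations and the KV cache -- which is why a 12 GB runtime runs out of memory.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision: 2 bytes per parameter
    device_map="auto",          # let accelerate place weights on the available device
)

# On a CUDA GPU you could instead apply post-training quantization with
# bitsandbytes (a different route from the TPU AQT post linked above):
# from transformers import BitsAndBytesConfig
# model = AutoModelForCausalLM.from_pretrained(
#     model_id,
#     quantization_config=BitsAndBytesConfig(load_in_4bit=True),
#     device_map="auto",
# )
```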

Hey @jp-defog , thanks for the info. Can we shard this model the way HuggingFaceH4/zephyr-7b-alpha is sharded, i.e. into ~1GB chunks, so it can run on Google Colab?
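
For reference, checkpoint sharding in transformers is done with `save_pretrained(..., max_shard_size=...)`. Here is a rough sketch (untested on Colab; the checkpoint name is an assumption). Note that sharding only splits the files on disk, so it doesn't by itself reduce the memory needed at load time; offloading via accelerate does that:

```python
# Sketch of checkpoint sharding + offloaded loading with transformers/accelerate.
# "defog/sqlcoder-7b-2" is an assumed checkpoint name.
import torch
from transformers import AutoModelForCausalLM

model_id = "defog/sqlcoder-7b-2"  # assumption

# Step 1 (needs a machine with enough RAM): re-save the checkpoint in ~1GB shards.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model.save_pretrained("sqlcoder-7b-sharded", max_shard_size="1GB")

# Step 2: load the sharded copy with accelerate's device map, spilling whatever
# doesn't fit on the GPU to CPU RAM and then to disk.
model = AutoModelForCausalLM.from_pretrained(
    "sqlcoder-7b-sharded",
    torch_dtype=torch.float16,
    device_map="auto",
    offload_folder="offload",
)
```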

@ajayyadav2055 You can try https://huggingface.co/PipableAI/pip-sql-1.3b, a 1.3B-parameter text-to-SQL model. It surpasses GPT-3.5 on the Spider dataset.
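
If it helps, a minimal sketch of running that 1.3B model on a free Colab GPU is below. The schema and question in the prompt are made up for illustration, and the exact prompt template should be taken from the model card:

```python
# Minimal sketch of running a 1.3B text-to-SQL model on a free Colab GPU.
# The prompt layout is illustrative only -- check the PipableAI model card
# for the exact template it expects.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PipableAI/pip-sql-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "<schema>CREATE TABLE sales (id INT, amount FLOAT, sold_on DATE)</schema>"
    "<question>What was the total amount sold in 2023?</question>"
    "<sql>"
)  # illustrative schema/question; adapt to your own tables

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```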

Defog.ai org

Hi @ajayyadav2055 , could you share more on how this chunking is done? I don't use Colab much, but I am familiar with Hugging Face's default pipeline parallelism when running in a single-node distributed GPU setting.
