error on Google Colab GPU T4

#2
by Clausss - opened

ValueError: Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your Tesla T4 GPU has compute capability 7.5. You can use float16 instead by explicitly setting the dtype flag in CLI, for example: --dtype=half.

Here is my code:

from bonito import Bonito, SamplingParams
from datasets import load_dataset

bonito = Bonito("BatsResearch/bonito-v1")

unannotated_text = load_dataset("csv", data_files="datasets.csv")

sampling_params = SamplingParams(max_tokens=756, top_p=0.95, temperature=0.1, n=1)
synthetic_dataset = bonito.generate_tasks(
    unannotated_text,
    context_col="input",
    task_type="nli",
    sampling_params=sampling_params,
)

You can use the dtype parameter as shown below:

...
from bonito import Bonito
bonito = Bonito("BatsResearch/bonito-v1", dtype="float16")
...

For future reference, you can read the docs of any package you are using with help(<class or function name>).

I get an "out of memory" error when loading Bonito on a T4. Do you have plans for a version that can run on this GPU?

Bats Research org

Hello everyone!
We have a tutorial for how to use a quantized version of the model in a Google Colab T4 instance: https://colab.research.google.com/drive/1tfAqUsFaLWLyzhnd1smLMGcDXSzOwp9r?usp=sharing

Hope this helps :)
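For readers who cannot open the notebook, the rough idea is to load a quantized checkpoint instead of the full bfloat16 weights. The sketch below is an assumption based on how Bonito wraps vLLM, not a verified recipe from the tutorial: the checkpoint path is a placeholder, and the quantization and max_model_len keywords are assumed to be forwarded to the underlying vLLM LLM constructor.

```python
# Sketch, unverified: Bonito wraps vLLM's LLM class, so vLLM keyword
# arguments should pass through. Substitute the quantized checkpoint
# name used in the linked Colab notebook.
from bonito import Bonito

bonito = Bonito(
    "path/to/quantized-bonito-awq",  # placeholder: AWQ-quantized checkpoint
    quantization="awq",              # assumption: forwarded to vLLM
    dtype="float16",                 # T4 (compute capability 7.5) has no bfloat16
    max_model_len=2048,              # shorter context to fit 16 GB of VRAM
)
```

The dtype="float16" part also resolves the original ValueError, since it avoids bfloat16 entirely.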

Thanks, this is exactly what I needed!

nihalnayak changed discussion status to closed
