Option: Using GPU with quantized model

#2
by alfredplpl - opened

I ran the code below, but it was slow because it ran on the CPU.

import transformers
pipeline = transformers.pipeline("text-generation", model="pfnet/plamo-13b", trust_remote_code=True)
prompt = """
PLaMo-13B is a LLaMA-based 13B model pre-trained on English and Japanese open datasets, developed by Preferred Networks, Inc. PLaMo-13B is released under Apache v2.0 license.
I translate the above mentioned sentences to Japanese:
"""
print(pipeline(text_inputs=prompt, max_new_tokens=512))

So, I recommend the following code instead:

pipeline = transformers.pipeline("text-generation", model="pfnet/plamo-13b", device_map="auto", model_kwargs={"load_in_8bit": True}, trust_remote_code=True)

This requires pip install accelerate bitsandbytes scipy, but the model runs much faster on a GPU.
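For reference, newer versions of transformers prefer passing an explicit BitsAndBytesConfig instead of the bare load_in_8bit flag. A minimal sketch of that variant (assuming recent transformers, accelerate, and bitsandbytes installs):

import transformers
from transformers import BitsAndBytesConfig

# Same 8-bit setup, but with the explicit quantization config object
# (assumes recent transformers/bitsandbytes; install via: pip install accelerate bitsandbytes scipy)
pipeline = transformers.pipeline(
    "text-generation",
    model="pfnet/plamo-13b",
    device_map="auto",  # let accelerate place the weights on the available GPU(s)
    model_kwargs={"quantization_config": BitsAndBytesConfig(load_in_8bit=True)},
    trust_remote_code=True,
)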

In addition, I ran the following code:

import transformers
pipeline = transformers.pipeline("text-generation", model="pfnet/plamo-13b", device_map="auto", model_kwargs={"load_in_8bit": True}, trust_remote_code=True)
prompt = """次の英文は私が和訳する予定の文章です。
'PLaMo-13B is a LLaMA-based 13B model pre-trained on English and Japanese open datasets, developed by Preferred Networks, Inc. PLaMo-13B is released under Apache v2.0 license.' 
私は以上の英文を次のような日本語に翻訳します。"""
print(pipeline(text_inputs=prompt, max_new_tokens=128))

Then, I got the following result:

Loading checkpoint shards: 100%|██████████| 3/3 [00:06<00:00,  2.16s/it]
[{'generated_text': "次の英文は私が和訳する予定の文章です。\n'PLaMo-13B is a LLaMA-based 13B model pre-trained on English and Japanese open datasets, developed by Preferred Networks, Inc. PLaMo-13B is released under Apache v2.0 license.' \n私は以上の英文を次のような日本語に翻訳します。\n「PLaMo-13Bは、英語と日本語のオープンデータセットで訓練された、13Bモデルです。PLaMo-13BはApache v2.0ライセンスの下で公開されています。」\nこの英文を和訳するにあたって、以下の点に注意しました。\n1. 「13B」は「13ビット」の略です。\n2. 「13B」は「13ビット」の略です。\n3. 「13B」は「13ビット」の略です。\n4. 「13B」は"}]

Thank you for your suggestion!

You're right that these options can speed up inference. However, they require additional libraries and a GPU, whereas we've confirmed that our example runs even without a GPU.

There are indeed many other options to try, such as torch_dtype="auto" to reduce memory usage. However, adding too many options could confuse users, so we've decided to stick with the simplest setup in our example.
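For readers who want to try it anyway, a minimal sketch of the torch_dtype option; it is the only change from the simple example above:

import transformers

# torch_dtype="auto" lets transformers load weights in the dtype stored in the
# checkpoint (e.g. float16/bfloat16), roughly halving memory versus float32.
pipeline = transformers.pipeline(
    "text-generation",
    model="pfnet/plamo-13b",
    torch_dtype="auto",
    trust_remote_code=True,
)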

Thanks again for your input! It helps us continue to improve our examples.

Hey! In order to put the pipeline on a device, you should use:

pipeline = transformers.pipeline("text-generation", model="pfnet/plamo-13b", trust_remote_code=True, device="cuda")

Thanks for your comment! While device="cuda" lets the model run on NVIDIA GPUs, it isn't suitable for other hardware; for instance, it doesn't work on MacBooks. Our goal is to keep this example as straightforward as possible.
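If you do want to pin the pipeline to whatever accelerator is present, here is a hedged sketch of portable device selection (CUDA, Apple's MPS backend, or CPU fallback); note that a 13B model may still not fit in memory without the 8-bit option discussed above:

import torch
import transformers

# Pick the best available backend: NVIDIA GPU, Apple Silicon (MPS), or CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

pipeline = transformers.pipeline(
    "text-generation",
    model="pfnet/plamo-13b",
    trust_remote_code=True,
    device=device,
)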

However, we greatly appreciate your valuable suggestions!

dhigurashi changed discussion status to closed
