
Int8/float32 quantization of madlad400-3b-mt, converted with CTranslate2 for running on CPU.

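For reference, a model like this can be produced with CTranslate2's Transformers converter. The snippet below is only a sketch of that conversion; the source model id (google/madlad400-3b-mt) and output directory are assumptions, not the exact command used to build this repo:

import ctranslate2.converters

# Sketch: convert and quantize the original checkpoint to int8_float32 for CPU use
converter = ctranslate2.converters.TransformersConverter("google/madlad400-3b-mt")
converter.convert("madlad400-3b-mt-int8-float32", quantization="int8_float32")
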
Example usage:

import ctranslate2, transformers
from huggingface_hub import snapshot_download

# Download the quantized model files from the Hub and get the local path
model_path = snapshot_download("zenoverflow/madlad400-3b-mt-int8-float32")
print("\n", end="")

# Load the CTranslate2 model on CPU and the original T5 tokenizer
translator = ctranslate2.Translator(model_path, device="cpu")
tokenizer = transformers.T5Tokenizer.from_pretrained(model_path)

# Target language as a MADLAD-400 language code
target_lang_code = "ja"

source_text = "This sentence has no meaning."

# MADLAD-400 expects the target language tag "<2xx>" prepended to the source text
input_text = f"<2{target_lang_code}> {source_text}"
input_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(input_text))
results = translator.translate_batch([input_tokens])
output_tokens = results[0].hypotheses[0]
output_text = tokenizer.decode(tokenizer.convert_tokens_to_ids(output_tokens))

print(output_text)
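
translate_batch also accepts several tokenized sentences in one call. A minimal sketch building on the variables above (the example sentences and the beam_size value are illustrative, not part of this model card):

# Sketch: batch translation of multiple sentences with beam search
source_texts = ["The weather is nice today.", "Where is the station?"]
batch_tokens = [
    tokenizer.convert_ids_to_tokens(tokenizer.encode(f"<2{target_lang_code}> {text}"))
    for text in source_texts
]
batch_results = translator.translate_batch(batch_tokens, beam_size=4)
for result in batch_results:
    print(tokenizer.decode(tokenizer.convert_tokens_to_ids(result.hypotheses[0])))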