---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- ALMA-13B-R
---
Quantizations of https://huggingface.co/haoranxu/ALMA-13B-R
# From original readme

A quick start for using our best system (ALMA-13B-R) for translation. An example of translating "我爱机器翻译。" ("I love machine translation.") into English:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-R", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-R", padding_side='left')

# Add the source sentence into the prompt template
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
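
Since this repository hosts GGUF quantizations rather than the original checkpoint, the same prompt template can be reused with a local GGUF runner. Below is a minimal sketch: the `build_alma_prompt` helper simply reproduces the template from the snippet above, and the llama-cpp-python call in the comments is an assumption (the quant filename shown is hypothetical; substitute whichever GGUF file you actually downloaded).

```python
def build_alma_prompt(src_lang: str, tgt_lang: str, sentence: str) -> str:
    # Same prompt template as the transformers example above,
    # factored out so it can be reused for other language pairs.
    return (
        f"Translate this from {src_lang} to {tgt_lang}:\n"
        f"{src_lang}: {sentence}\n"
        f"{tgt_lang}:"
    )

prompt = build_alma_prompt("Chinese", "English", "我爱机器翻译。")
print(prompt)

# With a GGUF quant via llama-cpp-python (filename is hypothetical):
# from llama_cpp import Llama
# llm = Llama(model_path="ALMA-13B-R.Q4_K_M.gguf", n_ctx=512)
# out = llm(prompt, max_tokens=20, temperature=0.6, top_p=0.9)
# print(out["choices"][0]["text"])
```

The helper keeps the exact wording of the template, which matters: ALMA models were fine-tuned on this specific "Translate this from X to Y:" format, so deviating from it can degrade translation quality.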