duyntnet committed on
Commit 623cc4d
1 Parent(s): 15a3f8f

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +37 -0
README.md ADDED
@@ -0,0 +1,37 @@
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- ALMA-13B-R
---
Quantizations of https://huggingface.co/haoranxu/ALMA-13B-R
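
As a minimal sketch of how one of these GGUF quantizations might be run locally, assuming llama-cpp-python is installed and a quantized file has been downloaded from this repo (the filename below is hypothetical; use whichever quantization you picked):

```
# Sketch only: load a GGUF quantization of ALMA-13B-R with llama-cpp-python.
# "ALMA-13B-R-Q4_K_M.gguf" is an assumed filename, not a guaranteed one.
from llama_cpp import Llama

llm = Llama(
    model_path="ALMA-13B-R-Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=512,        # a small context window is enough for the short translation prompt
    n_gpu_layers=-1,  # offload all layers to GPU if available; set 0 for CPU-only
)

# Same prompt template as the original model card
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
output = llm(prompt, max_tokens=64, temperature=0.6, top_p=0.9)
print(output["choices"][0]["text"])
```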

# From original readme

A quick start to using our best system (ALMA-13B-R) for translation. An example of translating "我爱机器翻译。" ("I love machine translation.") into English:
```
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# Load base model and LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-R", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-R", padding_side='left')

# Add the source sentence into the prompt template
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```