danielhanchen committed
Commit 095bfc5
1 Parent(s): 18767ae

Update README.md

Files changed (1):
  1. README.md +13 -0
README.md CHANGED
@@ -5,6 +5,19 @@ language:
 ---

 Original model from https://huggingface.co/openlm-research/open_llama_3b_600bt_preview.
+ Example below is adapted from https://github.com/openlm-research/open_llama
+ ```
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ model_name = "danielhanchen/open_llama_3b_600bt_preview"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype = torch.float16, device_map = "auto")
+
+ prompt = "Q: What is the largest animal?\nA:"
+ input_ids = tokenizer(prompt, return_tensors = "pt").input_ids.to(model.device)
+ print( tokenizer.decode( model.generate(input_ids, max_new_tokens = 32).ravel() ) )
+ ```

 This repo includes:
 1) Ported `LlamaTokenizer` to `LlamaTokenizerFast` via a few lines of code.
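
The actual conversion code for 1) is not shown in this commit. A minimal sketch of how such a port can be done with `transformers` (assuming the repo contains the slow SentencePiece tokenizer files; the exact lines used here may differ):

```
# Sketch: convert the slow LlamaTokenizer to LlamaTokenizerFast.
# from_slow = True forces a fresh conversion from the SentencePiece
# tokenizer.model, even when no tokenizer.json exists yet.
from transformers import LlamaTokenizerFast

fast_tokenizer = LlamaTokenizerFast.from_pretrained(
    "openlm-research/open_llama_3b_600bt_preview", from_slow = True
)
# Saving writes out tokenizer.json, so later loads use the fast path.
fast_tokenizer.save_pretrained("open_llama_3b_600bt_preview")
```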