---
license: mit
datasets:
- tatsu-lab/alpaca
---

This repo contains a low-rank adapter for LLaMA-13b, fine-tuned on the Stanford Alpaca dataset.

### How to use (8-bit)

```python
import torch
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")

# Load the base model in 8-bit and attach the LoRA adapter weights.
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-13b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(
    model,
    "baruga/alpaca-lora-13b",
    torch_dtype=torch.float16,
)
```

For further information, check out this GitHub repo: https://github.com/tloen/alpaca-lora.
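The snippet above only loads the model. As a minimal inference sketch, the helpers below wrap an instruction in the Alpaca prompt template used by the tloen/alpaca-lora repo and run generation with the `model` and `tokenizer` created above; `make_prompt` and `generate` are illustrative names, not part of this adapter repo, and the sampling parameters are placeholder values.

```python
def make_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca prompt template
    (the instruction-only variant, without an input field)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )


def generate(model, tokenizer, instruction: str, max_new_tokens: int = 128) -> str:
    """Generate a completion and return only the response portion.
    Assumes `model`/`tokenizer` were created as in the loading snippet."""
    import torch
    from transformers import GenerationConfig

    inputs = tokenizer(make_prompt(instruction), return_tensors="pt")
    input_ids = inputs["input_ids"].to(model.device)
    # Placeholder sampling settings; tune to taste.
    config = GenerationConfig(temperature=0.7, top_p=0.9, num_beams=1)
    with torch.no_grad():
        output = model.generate(
            input_ids=input_ids,
            generation_config=config,
            max_new_tokens=max_new_tokens,
        )
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    # The decoded text includes the prompt; keep only what follows the marker.
    return text.split("### Response:")[-1].strip()
```

Example call: `generate(model, tokenizer, "Tell me about alpacas.")`.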