Aspik101 committed on
Commit 23b91de
1 Parent(s): 47d1fc6

Update README.md

Files changed (1)
  1. README.md +10 -13
README.md CHANGED
@@ -4,7 +4,7 @@ datasets:
 - tatsu-lab/alpaca
 ---
 
-This repo contains a low-rank adapter for LLaMA-13b fit on the Stanford Alpaca dataset.
+This repo contains a low-rank adapter for LLaMA-13b fit on the Stanford Alpaca dataset translated into Polish.
 
 ### How to use (8-bit)
 
@@ -13,17 +13,14 @@ import torch
 from peft import PeftModel
 from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig
 
-tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")
-model = LlamaForCausalLM.from_pretrained(
-    "decapoda-research/llama-13b-hf",
-    load_in_8bit=True,
-    torch_dtype=torch.float16,
-    device_map="auto",
-)
-model = PeftModel.from_pretrained(
-    model, "baruga/alpaca-lora-13b",
-    torch_dtype=torch.float16
+tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")
+
+model = LlamaForCausalLM.from_pretrained(
+    "decapoda-research/llama-13b-hf",
+    load_in_8bit=True,
+    device_map="auto",
 )
-```
 
-For further information, check out this Github repo: https://github.com/tloen/alpaca-lora.
+
+model = PeftModel.from_pretrained(model, "lora-alpaca-pl4-13")
+```
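Note (not part of the commit): the new snippet imports GenerationConfig but never calls it. Below is a minimal inference sketch, assuming the `tokenizer` and `model` objects produced by the README code above; the Alpaca-style prompt template mirrors https://github.com/tloen/alpaca-lora, and the Polish instruction is only an illustration. Loading with `load_in_8bit=True` additionally requires the bitsandbytes package.

```python
import torch
from transformers import GenerationConfig

# Alpaca-style prompt template; the instruction is a hypothetical example
# ("Napisz krótki wiersz o wiośnie." = "Write a short poem about spring.").
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nNapisz krótki wiersz o wiośnie.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Beam-search settings similar to those used in tloen/alpaca-lora.
generation_config = GenerationConfig(temperature=0.2, top_p=0.75, num_beams=4)

with torch.no_grad():
    output = model.generate(
        **inputs,
        generation_config=generation_config,
        max_new_tokens=256,
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```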