felfri committed · Commit 850bb68 · verified · Parent: a735a1c

Update README.md

Files changed (1): README.md (+3 −7)
README.md CHANGED
@@ -18,22 +18,18 @@ Load the model weights from HuggingFace:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
+device = 'cuda'
 SCAR = AutoModelForCausalLM.from_pretrained(
     "AIML-TUDA/SCAR",
     trust_remote_code=True,
-    device_map = 'cuda',
+    device_map = device,
 )
-```
-
-The model loaded model is based on LLama3-8B base. So we can use the tokenizer from it:
-
-```python
 tokenizer = AutoTokenizer.from_pretrained(
     "meta-llama/Meta-Llama-3-8B", padding_side="left"
 )
 tokenizer.pad_token = tokenizer.eos_token
 text = "This is text."
-inputs = tokenizer(text, return_tensors="pt", padding=True).to('cuda')
+inputs = tokenizer(text, return_tensors="pt", padding=True).to(device)
 ```
 
 To modify the latent feature $h_0$ (`SCAR.hook.mod_features = 0`) of the SAE do the following:
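The hunk ends here, just before the README's modification example. For context, a minimal sketch of what that step might look like: `SCAR.hook.mod_features` is named in the line above, while `mod_scaling` and the call to `SCAR.generate` are assumptions about the repo's custom hook code rather than anything shown in this diff.

```python
# Continues from the loading snippet above. A hypothetical sketch, not the
# repo's verbatim example: `mod_features` is cited in the README text, but
# `mod_scaling` is an assumed attribute of the custom hook.
SCAR.hook.mod_features = 0      # target latent feature h_0, as in the README
SCAR.hook.mod_scaling = -100.0  # assumed: strength/sign of the intervention

# Generation itself uses the standard transformers API.
output = SCAR.generate(
    **inputs,
    max_new_tokens=32,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```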