Update README.md
README.md (changed)
@@ -21,6 +21,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 SCAR = AutoModelForCausalLM.from_pretrained(
     "AIML-TUDA/SCAR",
     trust_remote_code=True,
+    device_map = 'cuda',
 )
 ```
 
@@ -47,6 +48,7 @@ output = SCAR.generate(
     max_new_tokens=32,
     pad_token_id=tokenizer.eos_token_id,
 )
+print(tokenizer.decode(output[0], skip_special_tokens=True))
 ```
 The example above will decrease toxicity. To increase the toxicity one would set `SCAR.hook.mod_scaling = 100.0`. To modify nothing simply set `SCAR.hook.mod_features = None`.
 
@@ -60,7 +62,7 @@ poetry install
 
 The scripts for generating the training data are located in `./create_training_data`.
 The training script is written for a Determined cluster but should be easily adaptable for other training frameworks. The corresponding script is located here `./llama3_SAE/determined_trails.py`.
-Some the evaluation functions are located in `./evaluations`.
+Some of the evaluation functions are located in `./evaluations`.
 
 # Citation
 ```bibtex
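The sentence quoted in the second hunk implies three steering modes for SCAR's SAE hook. A minimal sketch of them, assuming the elided part of the README selects a feature index (the index 0 below is a placeholder) and uses a negative scaling to obtain the decrease in toxicity; only `mod_scaling = 100.0` and `mod_features = None` are quoted from the README:

```python
# Steering modes for SCAR's SAE hook. Only the 100.0 and None values
# are quoted in the README; the feature index and -100.0 are assumptions.
SCAR.hook.mod_features = 0       # assumed: index of the toxicity feature
SCAR.hook.mod_scaling = -100.0   # assumed: suppress it (decrease toxicity)

# SCAR.hook.mod_scaling = 100.0  # quoted: amplify it (increase toxicity)
# SCAR.hook.mod_features = None  # quoted: leave activations unmodified
```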
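Putting the updated snippets together, a hedged end-to-end sketch: the `from_pretrained` and `generate` arguments come from the diff, while the tokenizer line, the prompt, and the hook settings are assumptions filled in for completeness:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

SCAR = AutoModelForCausalLM.from_pretrained(
    "AIML-TUDA/SCAR",
    trust_remote_code=True,
    device_map="cuda",
)
# Assumption: the tokenizer ships in the same repository.
tokenizer = AutoTokenizer.from_pretrained("AIML-TUDA/SCAR")

# Assumed steering setup (see the sketch above): suppress one feature.
SCAR.hook.mod_features = 0
SCAR.hook.mod_scaling = -100.0

prompt = "Write a short reply to this comment:"  # illustrative placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(SCAR.device)
output = SCAR.generate(
    **inputs,
    max_new_tokens=32,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

With `device_map = 'cuda'` the model is placed on the GPU at load time, which is why the inputs are moved to `SCAR.device` before generation; `pad_token_id=tokenizer.eos_token_id` avoids the warning `generate` emits for tokenizers that define no pad token.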