AmelieSchreiber committed
Commit ff181ef
Parent(s): e4e92a1
Update README.md

README.md CHANGED
@@ -29,6 +29,20 @@ using a `75/25` train/test split. It achieves an evaluation loss of `0.153365746
 
 - PEFT 0.4.0
 
+This model uses a LoRA configuration with the LoRA rank set to `32`. In particular, the configuration is:
+
+```python
+peft_config = LoraConfig(
+    task_type=TaskType.TOKEN_CLS,
+    inference_mode=False,
+    r=32,
+    lora_alpha=16,
+    target_modules=["query", "key", "value"],
+    lora_dropout=0.1,
+    bias="all"
+)
+```
+
 ## Using the Model
 
 To use, try running: