---
license: mit
---
# ESM-2 QLoRA for Binding Site Prediction
With this model, we wanted to see how the performance metrics were affected by adapting additional weight matrices with QLoRA. The choice of which weight matrices to adapt proved to be by far the most important hyperparameter for improving performance metrics, whereas hyperparameters such as rank and scaling factor were negligible in comparison, with lower ranks performing just as well as higher ones. So, we tested the difference between adapting only the query, key, and value weight matrices and adapting all possible weight matrices. A sketch of the two adapter configurations is given below, followed by the comparison for the first epoch. Note the minor performance improvements for the model using every possible weight matrix (this model).
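
Below is a minimal sketch of how the two adapter configurations might be set up with PEFT and bitsandbytes. The checkpoint name, rank, scaling factor, dropout, and label count are illustrative placeholders rather than the exact values used for this model, and the `target_modules` names are assumptions based on the Hugging Face ESM implementation (verify them against `model.named_modules()`).

```python
# Sketch only: configuring QLoRA adapters for ESM-2 token classification.
import torch
from transformers import AutoModelForTokenClassification, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_checkpoint = "facebook/esm2_t12_35M_UR50D"  # placeholder ESM-2 checkpoint

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForTokenClassification.from_pretrained(
    base_checkpoint,
    num_labels=2,  # assumed binary binding-site labels per residue
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)

# This model: adapters on every linear weight matrix ("dense" also matches the
# attention output and feed-forward projections in the HF ESM implementation).
all_matrices = ["query", "key", "value", "dense"]
# Comparison model: adapters on the attention query/key/value projections only.
qkv_only = ["query", "key", "value"]

lora_config = LoraConfig(
    task_type="TOKEN_CLS",
    r=8,             # illustrative rank; the card reports rank mattered little
    lora_alpha=16,   # illustrative scaling factor
    lora_dropout=0.05,
    target_modules=all_matrices,  # swap in qkv_only for the comparison run
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```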
### This model
```python
Test (epoch 1):
{'eval_loss': 0.41490185260772705,
'eval_accuracy': 0.8625347674451358,
'eval_precision': 0.11370668247419904,
'eval_recall': 0.7800926533683039,
'eval_f1': 0.19848246486644372,
'eval_auc': 0.8222331548742136,
'eval_mcc': 0.2639007297474409}
```
### Query, Key, Value only model
```python
Test (epoch 1):
{'eval_loss': 0.3398605287075043,
'eval_accuracy': 0.8557050926566265,
'eval_precision': 0.10792930844408741,
'eval_recall': 0.7726298654561553,
'eval_f1': 0.18940102955847055,
'eval_auc': 0.8150939843855006,
'eval_mcc': 0.2535956911257298}
```