Tags: Fill-Mask · Transformers · PyTorch · Joblib · Safetensors · DNA · biology · genomics · custom_code · Inference Endpoints
carlesonielfa committed
Commit 82de94f
1 Parent(s): ac4ca3c

Update config.json


Enabled initializing the model as a TokenClassification or SequenceClassification model for use in a downstream task.

Now, with `model_name` pointing at the checkpoint, using

```
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(model_name, trust_remote_code=True)
```

or

```
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(model_name, trust_remote_code=True)
```

works, as it does for the NT-V1 models.

Was this functionality left out intentionally? I have tested this change on a token-classification fine-tuning task with LoRA, and it seems to work fine (see the sketch below).
If this change is desired, it should be integrated into all the other NT-V2 models.
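
For reference, a minimal sketch of that LoRA setup using the PEFT library. The checkpoint ID, label count, and target module names are assumptions for illustration, not taken from this repo:

```
from transformers import AutoModelForTokenClassification
from peft import LoraConfig, TaskType, get_peft_model

# Placeholder: Hub ID or local path of an NT-V2 checkpoint.
model_name = "InstaDeepAI/..."

model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=2,  # placeholder: number of token-level classes
    trust_remote_code=True,
)

lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    # Assumed to match the attention projection names in the custom ESM code.
    target_modules=["query", "value"],
)

# Inject LoRA adapters; the ESM backbone stays frozen and only the
# adapter weights (plus the classification head) are trained.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```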



@hdallatorre

Files changed (1)
  1. config.json +6 -2
config.json CHANGED
```
@@ -1,12 +1,16 @@
 {
   "add_bias_fnn": false,
   "architectures": [
-    "EsmForMaskedLM"
+    "EsmForMaskedLM",
+    "EsmForTokenClassification",
+    "EsmForSequenceClassification"
   ],
   "attention_probs_dropout_prob": 0.0,
   "auto_map": {
     "AutoConfig": "esm_config.EsmConfig",
-    "AutoModelForMaskedLM": "modeling_esm.EsmForMaskedLM"
+    "AutoModelForMaskedLM": "modeling_esm.EsmForMaskedLM",
+    "AutoModelForTokenClassification": "modeling_esm.EsmForTokenClassification",
+    "AutoModelForSequenceClassification": "modeling_esm.EsmForSequenceClassification"
   },
   "emb_layer_norm_before": false,
   "esmfold_config": null,
```