---
language:
- fr
tags:
- token-classification
- fill-mask
license: mit
datasets:
- iit-cdip
---
This model combines camembert-base with the pretrained LiLT checkpoint from the paper "LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding", and a visual backbone initialized from the pretrained "microsoft/dit-base" checkpoint.
*Note:* This model is meant to be fine-tuned, and must be loaded together with the modeling and configuration files from the `improve-dit` branch.
Original repository: https://github.com/jpWang/LiLT
To use it, fork the modeling and configuration files from the original repository and load the pretrained model through the corresponding classes (`LiLTRobertaLikeVisionConfig`, `LiLTRobertaLikeVisionForRelationExtraction`, `LiLTRobertaLikeVisionForTokenClassification`, `LiLTRobertaLikeVisionModel`).
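One way to fetch the files from the `improve-dit` branch mentioned above, without cloning manually, is `huggingface_hub.snapshot_download`; a minimal sketch (the exact file layout on that branch may differ):
```python
from huggingface_hub import snapshot_download

# Download the model repository at the `improve-dit` branch, which carries
# the custom modeling and configuration files alongside the weights.
local_dir = snapshot_download(
    "manu/lilt-camembert-dit-base-hf",
    revision="improve-dit",
)
print(local_dir)  # import the custom classes from this directory
```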
These classes can also be registered with the AutoConfig/AutoModel factories as follows:
```python
from transformers import AutoModelForTokenClassification, AutoConfig, AutoModel
from path_to_custom_classes import (
    LiLTRobertaLikeVisionConfig,
    LiLTRobertaLikeVisionForRelationExtraction,
    LiLTRobertaLikeVisionForTokenClassification,
    LiLTRobertaLikeVisionModel,
)

def patch_transformers():
    # Register the custom configuration and model classes with the Auto factories
    AutoConfig.register("liltrobertalike", LiLTRobertaLikeVisionConfig)
    AutoModel.register(LiLTRobertaLikeVisionConfig, LiLTRobertaLikeVisionModel)
    AutoModelForTokenClassification.register(LiLTRobertaLikeVisionConfig, LiLTRobertaLikeVisionForTokenClassification)
    # etc...
```
The model can then be loaded with:
```python
from transformers import AutoTokenizer

# patch_transformers() must have been executed beforehand
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModel.from_pretrained("manu/lilt-camembert-dit-base-hf")
model = AutoModelForTokenClassification.from_pretrained("manu/lilt-camembert-dit-base-hf")  # to be fine-tuned on a token classification task
```
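For fine-tuning on token classification, the label mappings can be passed at load time so the classification head is sized correctly. A hedged sketch, with a purely hypothetical label set (replace it with the labels of your dataset):
```python
# The labels below are hypothetical placeholders, not the model's actual label set.
labels = ["O", "B-HEADER", "I-HEADER", "B-QUESTION", "I-QUESTION", "B-ANSWER", "I-ANSWER"]

model = AutoModelForTokenClassification.from_pretrained(
    "manu/lilt-camembert-dit-base-hf",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
```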