|
---
{}
---
|
|
|
# LED for Legal Document Summarization
|
|
|
A Longformer-Encoder-Decoder (LED) model for summarizing long English legal documents.
|
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
|
|
|
The Longformer-Encoder-Decoder (LED) model is designed to address the challenge of summarizing long English texts, particularly legal documents. Its local-global attention mechanism lets it process much longer input sequences than standard Transformer encoders, making it well suited to legal document summarization.
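
The snippet below is a minimal usage sketch with the Hugging Face `transformers` library. The repository ID, the 16,384-token input limit, and the generation settings are assumptions for illustration, not confirmed properties of this checkpoint; substitute the actual values for the published model.

```python
# Minimal sketch: summarizing a long legal document with an LED checkpoint.
# "your-username/led-legal-summarization" is a placeholder repo ID.
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

model_id = "your-username/led-legal-summarization"  # hypothetical; replace with the real checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LEDForConditionalGeneration.from_pretrained(model_id)

document = "..."  # a long English legal document

# Assumed maximum input length of 16,384 tokens (typical for LED checkpoints).
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)

# LED's local-global attention: give the first token global attention so every
# position can attend to it, while all other tokens use sliding-window (local) attention.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    global_attention_mask=global_attention_mask,
    max_length=512,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```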
|
|
|
## Citation |
|
|
|
**BibTeX:** |
|
```
@misc{duc2023led,
  author = {Chu Đình Đức},
  title  = {Longformer-Encoder-Decoder for Legal Document Summarization},
  year   = {2023},
}
```