
# legal_t5_small_summ_sv model

Model for summarization of legal text written in Swedish. It was first released in this repository. The model was trained on a parallel corpus from JRC-ACQUIS.

## Model description

legal_t5_small_summ_sv is based on the t5-small model and was trained on a large corpus of parallel text. This smaller variant scales down the baseline t5 model by using `d_model = 512`, `d_ff = 2048`, 8-headed attention, and only 6 layers each in the encoder and decoder, giving it about 60 million parameters.
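These hyper-parameters can be checked directly against the checkpoint's configuration; a minimal sketch, assuming the checkpoint ships a standard t5-small style config:

```python
from transformers import T5Config

# Load the configuration stored with the checkpoint and print the
# architecture hyper-parameters quoted above.
config = T5Config.from_pretrained("SEBIS/legal_t5_small_summ_sv")
print(config.d_model)     # expected: 512
print(config.d_ff)        # expected: 2048
print(config.num_heads)   # expected: 8
print(config.num_layers)  # expected: 6 (the decoder mirrors the encoder)
```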

## Intended uses & limitations

The model can be used for summarization of legal texts written in Swedish.

### How to use

Here is how to use this model to summarize legal text written in Swedish in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_sv"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_summ_sv",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)

sv_text = "..."  # the Swedish legal text to summarize

pipeline([sv_text], max_length=512)
```
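The pipeline returns one dictionary per input text; assuming `TranslationPipeline`'s standard output format, the generated summary sits under the `translation_text` key:

```python
summary = pipeline([sv_text], max_length=512)[0]["translation_text"]
print(summary)
```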


## Training data

The legal_t5_small_summ_sv model was trained on the JRC-ACQUIS dataset, consisting of approximately 19 thousand texts.

## Training procedure

The model was trained on a single TPU Pod v3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). It has approximately 60 million parameters in total and uses an encoder-decoder architecture. The optimizer used is AdaFactor, with an inverse square root learning rate schedule for pre-training.
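The exact training code is not part of this card; the following is a minimal sketch of the optimizer setup described above, using the Adafactor implementation shipped with transformers (`relative_step=True` with `lr=None` enables its built-in inverse square root schedule):

```python
from transformers import Adafactor, AutoModelWithLMHead
from transformers.optimization import AdafactorSchedule

model = AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_sv")

# relative_step=True with lr=None makes Adafactor compute its own
# inverse-square-root learning rate, matching the description above.
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
lr_scheduler = AdafactorSchedule(optimizer)
```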

### Preprocessing

A unigram model was trained on 88M lines of text from the parallel corpus (covering all possible language pairs) to obtain the vocabulary (with byte pair encoding) used by this model.
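A hypothetical sketch of such a vocabulary build with SentencePiece; the input path and vocabulary size are assumptions, not details from this card:

```python
import sentencepiece as spm

# Train a unigram vocabulary model over the combined parallel corpus.
# "parallel_corpus.txt" is a placeholder for the 88M lines of text.
spm.SentencePieceTrainer.train(
    input="parallel_corpus.txt",
    model_prefix="legal_t5_vocab",
    vocab_size=32000,   # assumed; t5-small checkpoints use a ~32k vocabulary
    model_type="unigram",
)
```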

## Evaluation results

When the model is used on the summarization test dataset, it achieves the following results:

Test results:

| Model | Rouge1 | Rouge2 | RougeLsum |
|:-----:|:------:|:------:|:---------:|
| legal_t5_small_summ_sv | 78.84 | 69.97 | 77.59 |
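One way to reproduce this kind of score is with the rouge_score package; the reference and generated strings below are placeholders:

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"])

# Placeholder strings; in practice, iterate over the test set pairs
# and average the per-example scores.
scores = scorer.score("reference summary ...", "model-generated summary ...")
print({name: s.fmeasure for name, s in scores.items()})
```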

### BibTeX entry and citation info

Created by Ahmed Elnaggar (@Elnaggar_AI)