
*DISCLAIMER: This model is trained on a subset of the dataset; in particular, on the first 60 articles of Book 2 of the Italian Civil Code.

Abstract

Modeling law search and retrieval as prediction problems has recently emerged as a predominant approach in law intelligence. Focusing on the law article retrieval task, we present a deep learning framework named LamBERTa, which is designed for civil-law codes, and specifically trained on the Italian civil code. To our knowledge, this is the first study proposing an advanced approach to law article prediction for the Italian legal system based on a BERT (Bidirectional Encoder Representations from Transformers) learning framework, which has recently attracted increased attention among deep learning approaches, showing outstanding effectiveness in several natural language processing and learning tasks. We define LamBERTa models by fine-tuning an Italian pre-trained BERT on the Italian civil code or its portions, for law article retrieval as a classification task. One key aspect of our LamBERTa framework is that we conceived it to address an extreme classification scenario, which is characterized by a high number of classes, the few-shot learning problem, and the lack of test query benchmarks for Italian legal prediction tasks. To solve such issues, we define different methods for the unsupervised labeling of the law articles, which can in principle be applied to any law code system. We provide insights into the explainability and interpretability of our LamBERTa models, and we present an extensive experimental analysis over query sets of different types, for single-label as well as multi-label evaluation tasks. Empirical evidence has shown the effectiveness of LamBERTa, and also its superiority against widely used deep-learning text classifiers and a few-shot learner conceived for an attribute-aware prediction task.
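The extreme-classification setup above can be illustrated with a minimal sketch: each civil-code article is treated as its own class, and training examples are derived from the article's own text, so no manual annotation is needed. The helper name and chunking scheme below are illustrative assumptions, not the authors' exact labeling pipeline.

```python
# Sketch of unsupervised labeling for article-as-class training:
# every article id becomes a class label, and overlapping word
# windows of the article text provide multiple examples per class.
# (Hypothetical helper and parameters, not the LamBERTa codebase.)

def make_training_pairs(articles, chunk_size=10):
    """articles: dict mapping article id -> article text.
    Returns (text_chunk, article_id) pairs; the article id serves
    as the classification label."""
    pairs = []
    for art_id, text in articles.items():
        words = text.split()
        # emit overlapping word windows as extra examples per class
        for start in range(0, max(1, len(words) - chunk_size + 1), chunk_size // 2):
            chunk = " ".join(words[start:start + chunk_size])
            if chunk:
                pairs.append((chunk, art_id))
    return pairs

articles = {
    "art. 2043": "Qualunque fatto doloso o colposo che cagiona ad altri un danno "
                 "ingiusto obbliga colui che ha commesso il fatto a risarcire il danno.",
}
pairs = make_training_pairs(articles)
```

Each resulting pair can then feed a standard BERT fine-tuning loop with the article id as the target class.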

LamBERTa: A Deep Learning Framework for Law Article Retrieval


BibTeX Entry and Citation Info

@article{Lamberta,
  author  = {Andrea Tagarelli and Andrea Simeri},
  title   = {{Unsupervised law article mining based on deep pre-trained language representation models with application to the Italian civil code}},
  journal = {Artif. Intell. Law},
  volume  = {30},
  number  = {3},
  pages   = {417--473},
  year    = {2022},
  doi     = {10.1007/s10506-021-09301-8}
}

References

  • Tagarelli, A., Simeri, A. Unsupervised law article mining based on deep pre-trained language representation models with application to the Italian civil code. Artif Intell Law 30, 417–473 (2022). https://doi.org/10.1007/s10506-021-09301-8

