
AI Judge


Model Description

The advent of ChatGPT and GPT-4 has brought groundbreaking progress to natural language processing through their astonishing generative capabilities. Nevertheless, training and deploying such large-scale language models is exceedingly costly. Moreover, experience has shown that these models struggle to deliver satisfactory performance in knowledge-intensive domains such as jurisprudence. Common limitations include knowledge hallucination, inability to accurately apply legal provisions, and overly vague generated content.

To alleviate these challenges, we have trained a series of language models on Chinese legal corpora, known as JurisLMs. These models have been further pre-trained on various types of legal text, such as Chinese laws and regulations, legal consultations, and judgment documents. AI Judge is one model in the JurisLMs family: a GPT-2 model further pre-trained on legal judgment documents and fine-tuned together with an article selection model (a BERT-based classifier), yielding an explainable legal judgment model. Compared to existing models, AI Judge not only provides sentencing outcomes but also offers the corresponding judicial reasoning.
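To illustrate how the article selection stage could feed the generator, below is a minimal sketch assuming a hypothetical BERT classifier checkpoint; the name "path/to/article-selector" and the top-k handling are placeholders and are not part of this release.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Hypothetical checkpoint name for the article selection classifier; the actual
# fine-tuned checkpoint is not published under this identifier.
selector_name = "path/to/article-selector"
selector_tokenizer = BertTokenizer.from_pretrained(selector_name)
selector = BertForSequenceClassification.from_pretrained(selector_name)
selector.eval()

def select_articles(fact: str, top_k: int = 3):
    # Return the indices of the top-k candidate law articles for a fact description.
    inputs = selector_tokenizer(fact, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = selector(**inputs).logits
    return torch.topk(logits, k=top_k, dim=-1).indices[0].tolist()

# The selected article ids can then be mapped to statute texts and prepended to the
# fact description before it is passed to the GPT-2 generator shown below.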

Model Usage

import torch
from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline

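# Case fact description in Chinese. It ends with "本院认为," ("this court holds that,"),
# prompting the model to continue with the court's view.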
fact_description = "1、2013年6月25日9时许,被告人丁某某在平阴县中医院建筑工地工人宿舍,窃取被害人胡某(男,43岁)现金1500元,在逃离现场时被工地工人抓获,丁某某将窃取的现金返还被害人。2、2013年7月12日14时许,被告人丁某某在平阴县府前街文鼎嘉苑建筑工地工人宿舍,窃取被害人陈某(男,31岁)及王某(男,25岁)现金850元,在逃跑时被抓获,丁某某将盗窃现金返还被害人。本院认为,"

model_name = "seudl/aijudge"

# The pipeline expects a device index: 0 for the first GPU, -1 for CPU.
device = 0 if torch.cuda.is_available() else -1
tokenizer = BertTokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
generator = TextGenerationPipeline(model, tokenizer, device=device)
# The model has no dedicated padding token, so reuse the end-of-sequence token.
generator.tokenizer.pad_token_id = generator.model.config.eos_token_id
prediction = generator(
    fact_description,
    max_length=1024,          # maximum total length (prompt + generation) in tokens
    num_beams=1,
    top_p=0.7,                # nucleus sampling threshold
    num_return_sequences=1,
    eos_token_id=50256,
    pad_token_id=generator.model.config.eos_token_id,
)

# Strip spaces, keep only the text generated after "本院认为," ("this court holds that,"),
# and cut at the end-of-generation marker "<生成结束>".
court_view = prediction[0]["generated_text"].replace(" ", "").split("。本院认为,")[1].split("<生成结束>")[0]
print(court_view)
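The generated court view normally cites the applied statutes. As an illustrative follow-up (not part of the original pipeline), a simple regular expression can pull those citations out of court_view for inspection:

import re

# Extract statute citations such as "《中华人民共和国刑法》第二百六十四条" from the court view.
cited_articles = re.findall(r"《[^》]+》第[^条]{1,10}条", court_view)
print(cited_articles)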

Comparison

For detailed comparisons, please refer to JurisLMs.

Acknowledged Limitations

Despite substantial improvement through professional annotation and evaluation, AI Judge inevitably retains certain limitations, including but not limited to:

  • Potential oversight of crucial facts
  • Possible logical errors in cases involving multiple parties
  • Potential inaccuracies in conclusions
  • Possibility of outdated legal provisions

Disclaimer

This project is strictly for academic research purposes and is prohibited from commercial use. When utilizing third-party technologies, adhere to the corresponding open-source licenses. The accuracy of the content generated by this project is affected by factors such as the algorithm, randomness, and quantization precision, and therefore cannot be guaranteed. The project assumes no legal liability for any content produced by the model, nor shall it be held responsible for any damage arising from the use of related resources and output. Due to the time constraints of the R&D team, timely technical support cannot be provided.
