---
license: apache-2.0
language:
- zh
tags:
- Legal QA
---

## Project Description
<p align="justify"> The advent of ChatGPT, and GPT-4 in particular, has driven remarkable progress in natural language processing, and the generative capabilities of these models are deeply impressive. In practice, however, they often falter in knowledge-intensive domains such as law, where common failure modes include knowledge hallucination, incorrect application of legal provisions, and the generation of overly abstract content. </p>

<p align="justify"> To mitigate these issues, we have trained JurisLMs, a series of language models further pretrained on Chinese legal corpora, including legislation, legal consultations, and judicial documents, each tailored to a distinct scenario. Among them, AI Judge is an <font color="#FF0000">explainable legal decision prediction model</font>: GPT-2 is further pretrained and fine-tuned on legal corpora and combined with a <u>legal provision application model</u> (a BERT-based classifier). Existing decision prediction models typically output a verdict without justifying it; AI Judge provides both the verdict prediction and the corresponding court view. Using a similar framework, we trained AI Lawyer, an <font color="#FF0000">intelligent legal consultation model</font> based on Chinese LLaMA. Because consultation corpora annotated with legal provisions are scarce, we used <u>active learning</u> to fine-tune the <u>legal provision application model</u> on a limited dataset, enabling AI Lawyer to answer questions by correctly applying the corresponding provisions.</p>
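
As a rough illustration of the two-stage design described above, the sketch below shows how a provision classifier and a generative model could be chained. It is schematic only and not the project's actual code; select_provisions and generate are hypothetical callables standing in for the BERT-based classifier and the fine-tuned LLM.

```python
from typing import Callable, List

# Schematic sketch of the pipeline described above (hypothetical, not the project's code):
# a provision-application classifier first selects applicable statutes, which are then
# placed in the prompt so the generator can ground its answer in them.
def answer_consultation(
    question: str,
    select_provisions: Callable[[str], List[str]],  # hypothetical BERT-based provision classifier
    generate: Callable[[str], str],                 # hypothetical wrapper around the LLM's generate()
) -> str:
    provisions = select_provisions(question)        # 1. pick the applicable legal provisions
    context = "\n".join(provisions)
    prompt = f"### Provisions:\n{context}\n\n### Question:\n{question}\n\n### Answer:\n"
    return generate(prompt)                         # 2. generate an answer conditioned on them
```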

## AI Lawyer Demo and Usage
<!---<div align=center><img src="./images/ailawyer_framework.png"></div>
<center style="font-size:14px;color:#C0C0C0;text-decoration:underline">AI Lawyer framework</center>
<br>--->

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import torch
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig


def generate_prompt(instruction, input=None):
    # Standard Alpaca-style prompt template.
    if input:
        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
"""
    else:
        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""


base_model = "save_merge_weight_directory"  # merged base model produced in Step 4 below
lora_weights = "ailawyer_lora"  # download from https://huggingface.co/openkg/ailawyer

# Instruction (zh): "Assume you are a lawyer; analyze the following case and provide professional legal services."
instruction = "假设你是一名律师,请分析如下案例,并提供专业的法律服务。"
# Input (zh): "Since last March the labor contractor has owed me and two workmates more than 70,000 yuan in total.
# He keeps putting us off, cannot be found, or says he has no money when we do meet. What can we do to get paid?"
_input = "去年三月份包工头欠我和另外两个工友一共七万多元,然后一直拖着不给,也找不到人,或者是见面了就说没钱。现在要怎么做才能要到钱?"

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    load_in_8bit=False,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, lora_weights, torch_dtype=torch.float16).half()

model.config.pad_token_id = tokenizer.pad_token_id = 0
model.config.bos_token_id = 1
model.config.eos_token_id = 2
model.eval()

prompt = generate_prompt(instruction, _input)
inputs = tokenizer(prompt, return_tensors="pt")

input_ids = inputs["input_ids"].to("cuda")
generation_config = GenerationConfig(temperature=0.1, top_p=0.75, top_k=1, num_beams=1)
with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=500,
    )
output_ids = generation_output.sequences[0]
output = tokenizer.decode(output_ids)
print(output.split("### Response:")[1].strip())

# Response: 根据《保障农民工工资支付条例》第十六条 用人单位拖欠农民工工资的,应当依法予以清偿。因此,拖欠农民工工资属于违法行为,劳动者有权要求用工单位承担工资清偿责任,建议劳动者收集拖欠工资的证据,比如合同书,工资欠条,与工地负责人通话录音,短信微信聊天记录,工友证人证言等向劳动监察大队举报,要求责令有关单位支付工资,也可以向法院起诉要求判决支付农民工工资。可以向法律援助中心申请免费的法律援助,指派法律援助律师代为诉讼维权,可以向12345政府服务热线投诉。</s>
# English gloss of the response: under Article 16 of the Regulations on Guaranteeing Wage Payment to Migrant Workers,
# withheld wages must be repaid. Collect evidence (contract, wage IOU, call recordings, SMS/WeChat records, co-worker
# testimony), report to the labor inspection authority, sue in court, apply for free legal aid, or call the 12345 hotline.
```

## Environment
- RAM: 30 GB+; GPU memory: 32 GB+
- python>=3.9
- pip install -r requirements.txt (see the dependency sketch below)
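
The authoritative dependency list is the repository's requirements.txt. As a rough, unpinned sketch based on what the demo above imports (plus accelerate for device_map="auto", sentencepiece for the LLaMA tokenizer, and gradio for the web demo), it will contain at least:

```text
# Sketch only; defer to the repository's requirements.txt for exact packages and versions.
torch
transformers
peft
accelerate
sentencepiece
gradio
```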

## Model Merging

### Step 1: Download the original LLaMA 13B
This includes (see the expected layout sketched below):
- consolidated.*.pth
- tokenizer.model
- params.json
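
The conversion script in Step 3 expects its --input_dir to contain tokenizer.model alongside a 13B subdirectory, roughly laid out as follows (a sketch; the exact shard and checksum files depend on the distribution you obtained):

```text
origin_llama_weight_directory/
├── tokenizer.model
└── 13B/
    ├── consolidated.00.pth
    ├── consolidated.01.pth
    └── params.json
```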

### Step 2: Download the Chinese-LLaMA-Alpaca 13B LoRA weights and save them as chinese_llama_alpaca_lora_weight_directory
- HF: https://huggingface.co/ziqingyang/chinese-llama-lora-13b/tree/main
- Baidu Pan: https://pan.baidu.com/s/1BxFhYhDMipW7LwI58cGmQQ?pwd=ef3t

This includes: adapter_config.json, adapter_model.bin, special_tokens_map.json, tokenizer.model, tokenizer_config.json
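
One way to fetch these weights from the Hugging Face Hub is with the huggingface_hub client (a sketch assuming a recent huggingface_hub; downloading the files manually or via git with git-lfs works just as well):

```python
# Sketch: download the Chinese-LLaMA-Alpaca 13B LoRA weights into the expected directory.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ziqingyang/chinese-llama-lora-13b",
    local_dir="chinese_llama_alpaca_lora_weight_directory",
)
```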

### Step 3: Convert the original LLaMA to HF format

```shell
python convert_llama_weights_to_hf.py \
    --input_dir origin_llama_weight_directory \
    --model_size 13B \
    --output_dir origin_llama_hf_weight_directory
```
- input_dir: the directory containing the original LLaMA weights from Step 1
- output_dir: the directory where the converted HF-format LLaMA weights are saved
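
As an optional sanity check (not part of the original instructions), the converted checkpoint should load with transformers and report the original LLaMA vocabulary size of 32,000:

```python
# Optional check that the HF conversion produced a loadable checkpoint.
from transformers import LlamaConfig, LlamaTokenizer

config = LlamaConfig.from_pretrained("origin_llama_hf_weight_directory")
tokenizer = LlamaTokenizer.from_pretrained("origin_llama_hf_weight_directory")
print(config.vocab_size, len(tokenizer))  # both should be 32000 for the original LLaMA
```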

### Step 4: Merge the LoRA weights to generate the base model

```shell
python merge_llama_with_chinese_lora_to_hf.py \
    --base_model origin_llama_hf_weight_directory \
    --lora_model chinese_llama_alpaca_lora_weight_directory \
    --output_dir save_merge_weight_directory
```

- base_model: origin_llama_hf_weight_directory from Step 3
- lora_model: chinese_llama_alpaca_lora_weight_directory from Step 2
- output_dir: the directory where the merged full-model weights are saved
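
Another optional check before deployment: the merged model's input embeddings should line up with the extended Chinese tokenizer, whose vocabulary is larger than the original 32,000 because Chinese-LLaMA-Alpaca extends the tokenizer.

```python
# Optional check that the merged base model and the extended tokenizer are consistent.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "save_merge_weight_directory"
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto")

# The number of embedding rows should match the tokenizer's vocabulary size.
print(len(tokenizer), model.get_input_embeddings().weight.shape[0])
```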

## Deployment

Download the LoRA weights of this project (https://huggingface.co/openkg/ailawyer) and save them as ailawyer_lora.

### Web UI Deployment

Deploy locally with the Gradio web UI, for example on GPU 0:

```shell
CUDA_VISIBLE_DEVICES=0 python web_demo_llama_13B.py \
    --base_model save_merge_weight_directory \
    --lora_weights ailawyer_lora
```

- base_model: save_merge_weight_directory from Step 4
- lora_weights: the ailawyer_lora directory downloaded above
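
If you only need a quick local interface, the sketch below wraps the generation code from the Python demo above in a minimal Gradio app. It assumes model, tokenizer, and generate_prompt are already defined as in that demo; web_demo_llama_13B.py remains the supported entry point.

```python
# Minimal Gradio sketch reusing `model`, `tokenizer`, and `generate_prompt`
# from the Python demo above (assumption); not a replacement for web_demo_llama_13B.py.
import gradio as gr
import torch
from transformers import GenerationConfig

def consult(instruction, case_description):
    prompt = generate_prompt(instruction, case_description)
    input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to("cuda")
    with torch.no_grad():
        output_ids = model.generate(
            input_ids=input_ids,
            generation_config=GenerationConfig(temperature=0.1, top_p=0.75, top_k=1, num_beams=1),
            max_new_tokens=500,
        )
    return tokenizer.decode(output_ids[0]).split("### Response:")[1].strip()

gr.Interface(fn=consult, inputs=["text", "text"], outputs="text").launch()
```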

## Disclaimer
<p align="justify"> This project is intended exclusively for academic research; commercial use is strictly prohibited. The accuracy of the generated content cannot be guaranteed, as it depends on factors such as the algorithm, randomness, and quantization precision. Although every effort has been made to ensure the accuracy and timeliness of the data used, the nature of language models means that the output may lag behind current information and legal developments. This project therefore assumes no legal liability for any content output by the model, nor any responsibility for losses that may arise from using the related resources and output results. Machines should not and cannot replace professional legal advice: for specific legal issues or cases, please consult a qualified lawyer or other legal professional to obtain personalized advice. </p>