Spaces: Running on Zero
import gradio as gr
from llamafactory.chat import ChatModel
from llamafactory.extras.misc import torch_gc
# Load the fine-tuned legal model via LLaMA-Factory's chat interface.
args = dict(
    model_name_or_path="StevenChen16/llama3-8b-Lawyer",
    template="llama3",
    finetuning_type="lora",
    quantization_bit=8,
    use_unsloth=True,
)
chat_model = ChatModel(args)
background_prompt = """
You are an advanced AI legal assistant trained to assist with a wide range of legal questions and issues. Your primary function is to provide accurate, comprehensive, and professional legal information based on U.S. and Canada law. Follow these guidelines when formulating responses:
1. **Clarity and Precision**: Ensure that your responses are clear and precise. Use professional legal terminology, but explain complex legal concepts in a way that is understandable to individuals without a legal background.
2. **Comprehensive Coverage**: Provide thorough answers that cover all relevant aspects of the question. Include explanations of legal principles, relevant statutes, case law, and their implications.
3. **Contextual Relevance**: Tailor your responses to the specific context of the question asked. Provide examples or analogies where appropriate to illustrate legal concepts.
4. **Statutory and Case Law References**: When mentioning statutes, include their significance and application. When citing case law, summarize the facts, legal issues, court decisions, and their broader implications.
5. **Professional Tone**: Maintain a professional and respectful tone in all responses. Ensure that your advice is legally sound and adheres to ethical standards.
"""
def query_model(user_input):
    # Prepend the background prompt so every query carries the assistant's instructions.
    combined_query = background_prompt + user_input
    messages = [{"role": "user", "content": combined_query}]
    response = ""
    # stream_chat yields generated text incrementally; accumulate it into one reply.
    for new_text in chat_model.stream_chat(messages):
        response += new_text
    torch_gc()  # release cached GPU memory between requests
    return response
# Gradio interface
interface = gr.Interface(
    fn=query_model,
    inputs="text",
    outputs="text",
    title="Legal AI Assistant",
    description="Ask me any legal question related to U.S. and Canada law.",
)
if __name__ == "__main__":
    interface.launch()