LokPalAI: Bridging the Gap to Legal Empowerment

LokPalAI is a language model fine-tuned for Indian legal scenarios, designed to bridge the gap between individuals and legal empowerment. With LokPalAI, users can interact with a query box to seek information and guidance on Indian law.

Features:

  1. Interact with LokPalAI’s Query Box: LokPalAI provides a user-friendly query box interface where users can input their legal queries and receive accurate and relevant responses. Whether you need information about a specific law, legal procedure, or any other legal matter, LokPalAI is here to assist you.
  2. Enhanced with Guardrails: To ensure the accuracy and reliability of the information provided, LokPalAI incorporates guardrails that help prevent the generation of misleading or incorrect legal advice. We understand the importance of reliable legal information, and these guardrails are designed to maintain the highest standards of accuracy (an illustrative sketch follows this list).
  3. Real-Time Responses using RAG: LokPalAI leverages Retrieval-Augmented Generation (RAG) to provide real-time responses to your legal queries. RAG combines retrieval over a document index with a generative model, so the information provided is both contextually relevant and up to date (a second sketch after this list outlines the idea).
  4. Thorough Testing and Maintenance: We understand the criticality of maintaining a reliable and accurate legal information system. LokPalAI undergoes extensive testing to ensure its performance and reliability. We continuously monitor and update the model to account for changes in Indian law, ensuring that the information provided is always accurate and up to date.
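
As an illustration of the guardrail idea, here is a hypothetical output check; the citation rule, the refusal message, and the disclaimer are our assumptions for the sketch, not the deployed guardrail logic:

import re

DISCLAIMER = "This is general legal information, not legal advice."

def guard_response(text: str) -> str:
    # Hypothetical rule: require at least one statute or article reference
    # before returning an answer to the user.
    has_citation = bool(re.search(r"(Section \d+|Article \d+|AIR \d{4})", text))
    if not has_citation:
        return "I could not verify this answer against a legal source. Please consult a lawyer."
    return f"{text}\n\n{DISCLAIMER}"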

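And a minimal RAG sketch, assuming a sentence-transformers embedder and a small in-memory corpus; the embedder choice, the snippets, and the helper functions are hypothetical, since LokPalAI's actual retrieval stack is not published:

from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical snippets; the real system would index IndianKanoon judgments.
corpus = [
    "Section 420 IPC deals with cheating and dishonestly inducing delivery of property.",
    "Article 21 guarantees the right to life and personal liberty.",
]
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    # Embed the query and pull the k most similar snippets from the corpus.
    query_embedding = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=k)[0]
    return [corpus[hit["corpus_id"]] for hit in hits]

def build_prompt(query: str) -> str:
    # Prepend the retrieved context so the generator can ground its answer.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

The resulting prompt is then passed to a text-generation pipeline like the one shown under "How to Use for Inference?".
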
✨ LokpalGPT-Instruct-Falcon-7b

Dataset

The dataset is being curated from judgments available on IndianKanoon.com; you can refer to the whole process here. We will soon release the dataset and the training process.

How to Use for Inference?

💥 Falcon LLMs require PyTorch 2.0 for use with transformers!

For fast inference with Falcon, check out Text Generation Inference (TGI)! Read more in this blog post.
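
As a sketch of how a client might query a running TGI server (the endpoint address and generation parameters below are assumptions; install the client with pip install text-generation):

from text_generation import Client

# Assumes a TGI server is already serving this model at localhost:8080.
client = Client("http://127.0.0.1:8080")
response = client.generate(
    "What does Section 420 of the IPC cover?",
    max_new_tokens=200,
)
print(response.generated_text)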

You will need at least 16 GB of memory to swiftly run inference with LokpalGPT-Instruct-Falcon-7b.

from transformers import AutoTokenizer
import transformers
import torch

model = "lokpalai/lokpalgpt-falcon-7b-lora-4.5"

# Load the tokenizer and build a text-generation pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # Falcon repos ship custom modelling code
    device_map="auto",       # place the model on available devices
)

# Sample a single response; temperature and top_k control randomness.
sequences = pipeline(
    "Can you analyze the legal implications of the Ayodhya Verdict by the Supreme Court of India?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    temperature=0.5,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,  # Falcon has no dedicated pad token
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
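
If you have less than 16 GB of memory, an 8-bit load can reduce the footprint. A minimal sketch, assuming the bitsandbytes package is installed; this quantized setup is our suggestion, not an officially tested configuration:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "lokpalai/lokpalgpt-falcon-7b-lora-4.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Quantize weights to 8-bit on load (requires bitsandbytes and a CUDA GPU).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    trust_remote_code=True,
    device_map="auto",
)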