

MIRA: Mental Illumination and Reflective Aid

Version: 0.0
Author: Msp Raja

Overview

MIRA (Mental Illumination and Reflective Aid) is an AI-powered assistant built on the LLaMA 3.1 8B model, designed to offer compassionate and insightful support to people seeking mental wellness and emotional balance. MIRA’s mission is to illuminate the mind, guide self-reflection, and foster resilience in a supportive, non-judgmental environment.

Purpose

MIRA is developed to be a reliable companion for those navigating mental health challenges. It provides personalized assistance by understanding users’ needs, showing empathy, and offering thoughtful responses. Whether it's helping someone through anxiety, stress, or emotional difficulties, MIRA is here to listen, reflect, and guide users towards a healthier mindset.

Features

  • Empathetic Conversations: MIRA engages users with empathy and understanding, creating a safe space for them to express their thoughts and feelings.
  • Insightful Guidance: MIRA provides reflective insights that encourage users to explore their emotions and thoughts more deeply.
  • Supportive Reminders: MIRA offers gentle reminders and encouragement to help users stay focused on their mental wellness goals.
  • Interactive Self-Care: MIRA includes exercises and tips for self-care practices, promoting mindfulness and resilience.

Model Details

MIRA is fine-tuned on the LLaMA 3.1 8B model, a state-of-the-art language model known for its large-scale capabilities and nuanced understanding of human language. The fine-tuning process focused on enhancing the model's ability to engage in compassionate and context-aware dialogues, particularly in the domain of mental health and therapy.

Core Values

  • Compassion: Every interaction with MIRA is grounded in empathy and understanding.
  • Respect: MIRA respects the user's emotions and responses, providing non-judgmental support.
  • Growth: MIRA encourages personal growth and resilience through thoughtful guidance.

Use Cases

  • Anxiety and Stress Management: MIRA can help users navigate anxious thoughts and provide calming strategies.
  • Emotional Support: For users dealing with loneliness, grief, or sadness, MIRA offers a comforting presence.
  • Mindfulness and Reflection: MIRA guides users through mindfulness exercises and reflective practices to enhance mental clarity.

Getting Started

To begin using MIRA:

  1. Install MIRA: Download the MIRA model from [repository link], or load it directly with Unsloth as shown under Uploaded Model below.
  2. Configure Settings: Customize MIRA’s settings to tailor the experience to your needs.
  3. Start Interacting: Engage with MIRA by asking questions, sharing your thoughts, or simply seeking guidance.

Feedback and Contributions

We welcome feedback to improve MIRA's capabilities. If you have suggestions, encounter issues, or want to contribute, please reach out via [contact information].

License

MIRA is licensed under the Apache License 2.0. Please refer to the LICENSE file for more details.

Uploaded Model

To load this model with Unsloth:

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Msp/mira-instruct-1.0", # the fine-tuned MIRA checkpoint
    max_seq_length = 4096,
    dtype = None,        # auto-detect (bfloat16 on recent GPUs, float16 otherwise)
    load_in_4bit = True, # 4-bit quantization to reduce memory usage
    # token = "hf_...",  # only needed if the repository is gated or private
)
FastLanguageModel.for_inference(model) # enable Unsloth's native 2x faster inference

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

inputs = tokenizer(
    [
        alpaca_prompt.format(
            "", # instruction
            "I hope you're doing well. I've been going through a really painful divorce recently, and I've been feeling quite lost and uncertain about the future. It's been a really difficult time for me.", # input
            "", # output - leave this blank for generation!
        )
    ],
    return_tensors = "pt",
).to("cuda")

from transformers import TextStreamer

# Stream the response to stdout token by token as it is generated.
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 512)
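
If you prefer to capture the reply as a string rather than streaming it to stdout, a minimal variant (assuming the same Alpaca-style template defined above) is:

outputs = model.generate(**inputs, max_new_tokens = 512)
reply = tokenizer.batch_decode(outputs, skip_special_tokens = True)[0]
# The model echoes the prompt, so keep only the text after the response marker.
print(reply.split("### Response:")[-1].strip())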

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
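
For reference, the snippet below sketches how such an Unsloth + TRL fine-tune is typically set up (following the standard Unsloth SFT notebook pattern). The base checkpoint, LoRA hyperparameters, dataset file, and training arguments are illustrative assumptions, not the actual configuration used to train MIRA.

from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Assumed base checkpoint; the card does not name the exact starting weights.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Meta-Llama-3.1-8B-Instruct",
    max_seq_length = 4096,
    dtype = None,
    load_in_4bit = True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset file: each record holds an Alpaca-formatted "text" field
# built from counselling-style dialogues (the real training data is not published).
dataset = load_dataset("json", data_files = "mira_dialogues.json", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 4096,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        learning_rate = 2e-4,
        num_train_epochs = 1,
        logging_steps = 10,
        output_dir = "outputs",
    ),
)
trainer.train()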
