Overview
This chatbot is a LLaMA 3.2 3B Instruct model fine-tuned on the PHR Therapy Dataset, improving its ability to hold meaningful and supportive conversations.
Features
- Empathetic Responses: Trained to understand and respond with emotional intelligence.
- Context Awareness: Retains context over multiple interactions.
- Mental Health Focus: Provides supportive and non-judgmental responses based on therapy-related dialogues.
- Efficient Inference: Distributed as a GGUF model for low-latency local inference (e.g., via Ollama or llama.cpp).
Model Fine-Tuning Details
- Base Model: LLaMA 3.2 3B Instruct
- Dataset: PHR Therapy Dataset (contains therapist-patient conversations for empathetic response generation)
- Fine-Tuning Framework: Unsloth (optimized training for efficiency)
- Training Environment: Local GPU / Cloud Instance (depending on available resources)
- Optimization Techniques (a minimal training sketch follows this list):
  - LoRA (Low-Rank Adaptation) for parameter-efficient tuning
  - Mixed Precision Training for speed and memory efficiency
  - Supervised Fine-Tuning (SFT) on therapist-patient interactions
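The exact training script is not included in this card. The sketch below shows how a typical Unsloth LoRA + SFT run of this kind is set up; the dataset identifier, hyperparameters, and the `text` field name are illustrative assumptions, and `SFTTrainer` argument names can differ between trl versions.

```python
# Minimal sketch of the fine-tuning setup described above, not the exact script.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model through Unsloth in 4-bit for memory-efficient training.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of parameters is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset ID -- substitute the actual PHR Therapy Dataset,
# preformatted into a single "text" column with the Llama 3.2 chat template.
dataset = load_dataset("your-username/phr-therapy-dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,  # mixed precision training
        output_dir="outputs",
    ),
)
trainer.train()
```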
Installation
Using Ollama

```bash
ollama run hf.co/Ishan93/Fine_tuned_ver2
```
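Once the model has been pulled with the command above, it can also be called from Python through the official `ollama` client (`pip install ollama`). This is a hedged sketch; the prompt is illustrative.

```python
import ollama

# Chat with the locally served model; the model tag matches the `ollama run` command above.
response = ollama.chat(
    model="hf.co/Ishan93/Fine_tuned_ver2",
    messages=[{"role": "user", "content": "I've had a stressful week."}],
)
print(response["message"]["content"])
```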
Usage
Using llama-cpp-python (e.g., in Google Colab or another notebook)
```python
from llama_cpp import Llama

# Download the GGUF file from the Hugging Face Hub and load it.
llm = Llama.from_pretrained(
    repo_id="Ishan93/Fine_tuned_ver2",
    filename="Fine_tuned_ver2.gguf",
)
```
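After loading, a reply can be generated with llama-cpp-python's chat API. The system prompt and user message below are illustrative examples, not part of the published model card.

```python
# Generate one reply using the chat-completion interface.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a supportive, empathetic assistant."},
        {"role": "user", "content": "I've been feeling overwhelmed at work lately."},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```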
Model tree for Ishan93/Fine_tuned_ver2
- Base model: meta-llama/Llama-3.2-3B-Instruct
- Fine-tuned from: unsloth/Llama-3.2-3B-Instruct