---
license: other
license_name: helpingai
license_link: https://helpingai.co/license
pipeline_tag: text-generation
language:
- en
tags:
- HelpingAI
- Emotionally-Intelligent
- EQ-focused
- Conversational
- SLM
library_name: transformers
---
# HelpingAI3

## Model Description
HelpingAI3 is an advanced language model developed to excel in emotionally intelligent conversations. Building upon the foundations of HelpingAI2.5, this model offers enhanced emotional understanding and contextual awareness.
## Model Details
- **Developed by:** HelpingAI
- **Model type:** Decoder-only large language model
- **Language:** English
- **License:** HelpingAI License
## Training Data
HelpingAI3 was trained on a diverse dataset comprising:
- **Emotional Dialogues:** 15 million rows to enhance conversational intelligence.
- **Therapeutic Exchanges:** 3 million rows aimed at providing advanced emotional support.
- **Cultural Conversations:** 250,000 rows to improve global awareness.
- **Crisis Response Scenarios:** 1 million rows to better handle emergency situations.
## Training Procedure
The model underwent the following training processes:
- **Base Model:** Initialized from HelpingAI2.5.
- **Emotional Intelligence Training:** Employed Reinforcement Learning for Emotion Understanding (RLEU) and context-aware conversational fine-tuning.
- **Optimization:** Used mixed-precision training and advanced token-efficiency techniques (see the sketch below).
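
The exact RLEU and fine-tuning pipeline has not been published. As a rough illustration of the mixed-precision fine-tuning step mentioned above, the following minimal sketch uses the Hugging Face `Trainer` API; the base checkpoint name, dataset file, and hyperparameters are illustrative assumptions, not the actual HelpingAI3 configuration.

```python
# Minimal sketch of mixed-precision supervised fine-tuning with the Hugging Face Trainer.
# The base checkpoint name, dataset file, and hyperparameters below are assumptions
# for illustration only, not the actual HelpingAI3 training configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "HelpingAI/HelpingAI2.5-10B"  # assumed HelpingAI2.5 checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure padding works for batching

# Hypothetical emotional-dialogue corpus with one conversation per line in a "text" field.
dataset = load_dataset("json", data_files="emotional_dialogues.jsonl", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=dataset.column_names,
)

args = TrainingArguments(
    output_dir="helpingai3-sft",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,  # mixed-precision training (assumes bf16-capable hardware)
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
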
## Intended Use
HelpingAI3 is designed for:
- **AI Companionship & Emotional Support:** Offering empathetic interactions.
- **Therapy & Wellbeing Guidance:** Assisting in mental health support.
- **Personalized Learning:** Tailoring educational content to individual needs.
- **Professional AI Assistance:** Enhancing productivity in professional settings.
## Limitations
While HelpingAI3 strives for high emotional intelligence, users should be aware of potential limitations:
- **Biases:** The model may inadvertently reflect biases present in its training data.
- **Understanding Complex Emotions:** It may misinterpret nuanced or ambiguous human emotions.
- **Not a Substitute for Professional Help:** For serious emotional or psychological issues, consult a qualified professional.
## How to Use

### Using Transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the HelpingAI3 model and tokenizer
model = AutoModelForCausalLM.from_pretrained("HelpingAI/HelpingAI-3")
tokenizer = AutoTokenizer.from_pretrained("HelpingAI/HelpingAI-3")

# Define the chat input
chat = [
    {"role": "system", "content": "You are HelpingAI, an emotional AI. Always answer my questions in the HelpingAI style."},
    {"role": "user", "content": "Introduce yourself."}
]

# Apply the chat template and move the input IDs to the model's device
inputs = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Generate a response
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens (everything after the prompt)
response = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
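
For interactive use, the reply can also be streamed token by token as it is generated. The snippet below is a minimal optional sketch using `transformers`' `TextStreamer`, reusing `model`, `tokenizer`, and `inputs` from the example above.

```python
from transformers import TextStreamer

# Stream the reply to stdout as it is generated (reuses model, tokenizer, and inputs from above).
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    streamer=streamer,
)
```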