RIMA HAZRA

rimahazra

AI & ML interests

AI and Safety, AI Hallucinations, Natural Language Processing, Information Retrieval, Large Language Models.

Recent Activity

authored a paper 6 days ago
upvoted a paper 9 days ago
updated a collection ("AI and Safety") 9 days ago

Organizations

Posts 1

šŸ”„ šŸ”„ Releasing our new paper on AI safety alignment -- Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations šŸŽÆ with Sayan Layek, Somnath Banerjee and Soujanya Poria.

šŸ‘‰ We propose Safety Arithmetic, a training-free framework that enhances LLM safety across different scenarios: base models, supervised fine-tuned (SFT) models, and edited models. Safety Arithmetic combines Harm Direction Removal (HDR), which steers model parameters away from harmful content, with Safety Alignment, which steers activations toward safe responses.

šŸ‘‰ Paper: https://arxiv.org/abs/2406.11801v1
šŸ‘‰ Code: https://github.com/declare-lab/safety-arithmetic
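For intuition about the two operations named in the title (steering parameters and steering activations), here is a minimal, hypothetical PyTorch sketch of the general idea: projecting a weight update away from a "harm direction" and nudging hidden activations toward a "safety direction". The function names, tensor shapes, and scaling are illustrative assumptions only, not the implementation in the repository above; see the linked code for the authors' actual method.

```python
import torch

def remove_direction(weight_delta: torch.Tensor, harm_dir: torch.Tensor) -> torch.Tensor:
    # Project the weight update onto the subspace orthogonal to the harm
    # direction, so the update no longer moves the model along that direction.
    d = harm_dir / harm_dir.norm()
    return weight_delta - (weight_delta @ d).unsqueeze(-1) * d

def steer_activation(hidden: torch.Tensor, safety_vec: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # Nudge a hidden activation toward the safety direction at inference time.
    return hidden + alpha * (safety_vec / safety_vec.norm())

# Toy usage with random tensors (shapes are arbitrary, for illustration only).
delta = torch.randn(16, 8)    # hypothetical per-layer fine-tuning update
harm = torch.randn(8)         # hypothetical "harm direction" in the layer's input space
clean_delta = remove_direction(delta, harm)
# The cleaned update has (numerically) zero component along the harm direction:
print(torch.allclose(clean_delta @ (harm / harm.norm()), torch.zeros(16), atol=1e-5))
```

Both operations are simple linear-algebraic edits applied at test time, which is what makes a training-free, test-time framework of this kind possible.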

Models

None public yet

Datasets

None public yet