
Empathy Detection in Text Using NLP and Deep Learning

Overview

This repository contains the implementation of my bachelor's thesis titled "Empathy Detection in Text Using NLP and Deep Learning." The project leverages natural language processing (NLP) techniques and deep learning models to recognize empathy in textual data.

Table of Contents

  1. Introduction
  2. Methods
  3. Results
  4. Web Application

Introduction

Empathy is a fundamental human emotion crucial for social functioning, allowing us to understand and share the feelings of others. This project focuses on detecting empathy in text data using NLP techniques. The goal is to develop models that can accurately classify text based on empathetic expressions.

Methods

The project utilizes a pretrained BERT model (BERT-Large) for text classification. The methodology includes:

  1. Dataset Preparation: Collecting and labeling text data with emotional categories.
  2. Model Training: Fine-tuning the BERT model on the labeled dataset (a minimal fine-tuning sketch follows this list).
  3. Model Evaluation: Assessing the performance using metrics such as accuracy and F1 score.
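
The following is a minimal fine-tuning sketch using the Hugging Face Transformers library. The dataset files, column names, label set, and hyperparameters are illustrative assumptions, not the exact setup used in the thesis.

```python
# Illustrative fine-tuning sketch; file names, columns, and labels are assumed.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-large-uncased", num_labels=2  # e.g., empathic vs. non-empathic (assumed)
)

# Assumes CSV files with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    # Tokenize and pad/truncate each text to a fixed length.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="empathy-bert",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```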

Model Architecture

  • BERT-Large: 24 layers, 1024 hidden size, 16 attention heads, 340 million parameters.
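
These figures can be verified directly from the published BERT-Large configuration; the snippet below assumes the standard "bert-large-uncased" checkpoint from the Transformers hub.

```python
# Inspect the BERT-Large configuration; printed values match those listed above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-large-uncased")
print(config.num_hidden_layers)    # 24
print(config.hidden_size)          # 1024
print(config.num_attention_heads)  # 16
```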

Results

The fine-tuned model achieved the following performance metrics:

  • Accuracy: 0.7669
  • F1 Score: 0.76

These results indicate the effectiveness of using BERT models for empathy detection in text.
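
As a reference for how such metrics are typically computed, here is an illustrative snippet using scikit-learn; the labels and predictions are placeholders, and the F1 averaging scheme is an assumption since the card does not specify it.

```python
# Placeholder evaluation example; y_true and y_pred are not the thesis data.
from sklearn.metrics import accuracy_score, f1_score

y_true = [1, 0, 1, 1, 0]  # gold labels (placeholder)
y_pred = [1, 0, 0, 1, 0]  # model predictions (placeholder)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 Score:", f1_score(y_true, y_pred, average="weighted"))
```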

Web Application

A web application was developed to allow real-time analysis of empathetic expressions in user-provided text. The application is built on the fine-tuned BERT model and provides an interactive interface where users can input text and receive an empathy analysis.
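
The card does not specify the web framework or model path used, so the following is only a sketch of how such an interface could be wired up, here assuming Gradio and a locally saved fine-tuned checkpoint named "empathy-bert".

```python
# Hypothetical web interface sketch; framework choice and model path are assumptions.
import gradio as gr
from transformers import pipeline

# Assumes the fine-tuned model was saved locally under "empathy-bert".
classifier = pipeline("text-classification", model="empathy-bert")

def analyze(text):
    # Run the classifier and return the predicted label with its confidence score.
    result = classifier(text)[0]
    return f"{result['label']} (score: {result['score']:.2f})"

demo = gr.Interface(fn=analyze, inputs="text", outputs="text", title="Empathy Detection")
demo.launch()
```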
