# TADBot

## Overview

TADBot is a small language model trained on the nbertagnolli/counsel-chat dataset. It is a fine-tuned version of Gemma 2 2B, a small language model with 2 billion parameters. TADBot is designed to help people deal with mental health problems and offer them advice based on the context of the conversation. It is not intended to replace professional mental health care, but rather to provide a supportive and empathetic resource for those who may be struggling with mental health issues. TADBot is still in development and is not yet available for public use.

## Technology Used

- Gemma 2 2B: A small language model with 2 billion parameters that TADBot is fine-tuned from.
- nbertagnolli/counsel-chat: The dataset used to train TADBot on mental health and advice-giving tasks.
- Hugging Face Transformers: The library used to fine-tune the Gemma 2 2B model on the nbertagnolli/counsel-chat dataset.
- PyTorch: The library used for training and fine-tuning the language model.
- Flask: The library used to create the TADBot server.
- Raspberry Pi: A small, low-cost computer used to host the Text-to-Speech and Speech-to-Text models and the TADBot server.
- FER: A deep learning model used to detect emotions from faces in real time using a webcam.
- S2T and T2S: Speech-to-Text and Text-to-Speech models used to convert speech to text and text to speech, respectively.

# Features

## FER Model

- TADBot uses a deep learning model to detect emotions from faces in real time using a webcam. This allows TADBot to better understand the emotional context of a conversation and provide more appropriate and empathetic responses.
- The data from the FER model is sent to the TADBot server, where it is used to identify the emotion in the image sent by the client. This information is then used to generate a more appropriate response from the model.
- The data is also logged separately in a text file, which the client can access to track the change in emotion during the conversation. This can be used to provide insights into the conversation.
- The data is not retained: it is erased after every conversation, adhering to doctor-client privacy.

> HLD for the FER model

```mermaid
flowchart TD
    %% User Interface Layer
    A[Raspberry PI] -->|Sends Image| B[detecfaces.py]
    B --->|Returns processed data| A
    %% Server
    subgraph Server
        %% Processing Layer
        B --> |Captured Image| T1[prediction.py]
        M1[RAFDB trained model] --> |epoch with best acc 92%| B
        T1 --> |Top 3 emotions predicted| B
        %% Model Layer
        M1
        %% Processing
        subgraph Processing
            T1 --> |Send Image| T2[detec_faces]
            T2 --> |Returns a 224x224 face| T1
        end
    end
```

## S2T Model and T2S Model

# How It Works

## Model

TADBot uses a fine-tuned version of the Gemma 2 2B language model to generate responses. The model is trained on the nbertagnolli/counsel-chat dataset from Hugging Face, which contains conversations between mental health professionals and clients. The model is fine-tuned using the Hugging Face Transformers library and PyTorch.

### Dataset

The raw version of the dataset consists of 2275 conversations taken from an online mental health platform.

# Implementation

## Deployment Instructions

To deploy TADBot locally, follow these steps:

- Create a virtual environment (preferably Python 3.11.10) with pip-tools or uv installed, and install the required dependencies:

```
pip-sync requirements.txt         # if you are using pip-tools
pip install -r requirements.txt   # if you are using pip
uv sync                           # if you are using uv
```
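As a rough illustration of how a counsel-chat record can be turned into a chat-formatted training example for fine-tuning, here is a minimal sketch. The column names (`questionText`, `answerText`) are assumptions based on the public dataset, and the Gemma-style turn markers shown are illustrative of the chat template, not TADBot's exact preprocessing:

```python
# Sketch: convert one counsel-chat style record into a single training
# string using Gemma-style turn markers. Column names are assumptions.
def to_chat_example(record: dict) -> str:
    prompt = record["questionText"].strip()
    answer = record["answerText"].strip()
    return (
        "<start_of_turn>user\n" + prompt + "<end_of_turn>\n"
        "<start_of_turn>model\n" + answer + "<end_of_turn>"
    )

# Hypothetical record for demonstration only (not real dataset content).
example = {
    "questionText": "I have been feeling anxious lately.",
    "answerText": "It is normal to feel anxious sometimes.",
}
print(to_chat_example(example))
```

In practice the formatted strings would then be tokenized and fed to the Transformers `Trainer` (or a similar loop) for fine-tuning.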
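To make the client-server flow described above concrete, here is a minimal Flask sketch of a chat endpoint. The route name, payload fields, and reply logic are all assumptions for illustration, not TADBot's actual API; a real deployment would run inference with the fine-tuned Gemma 2 2B model instead of the placeholder function:

```python
# Minimal sketch of a TADBot-style Flask endpoint (assumed route and
# payload fields): the client posts the transcribed text plus the
# emotion detected by the FER model, and the server returns a reply.
from flask import Flask, jsonify, request

app = Flask(__name__)

def generate_reply(text: str, emotion: str) -> str:
    # Placeholder: a real deployment would run the fine-tuned
    # Gemma 2 2B model here, conditioned on the detected emotion.
    return f"[{emotion}] I hear you. Tell me more about: {text}"

@app.route("/chat", methods=["POST"])
def chat():
    payload = request.get_json()
    reply = generate_reply(payload["text"], payload.get("emotion", "neutral"))
    return jsonify({"reply": reply})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

On the Raspberry Pi side, the S2T output and the FER emotion label would be posted to this endpoint, and the returned reply passed to the T2S model.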