---
license: apache-2.0
base_model:
- meta-llama/Llama-3.2-1B
pipeline_tag: question-answering
---
# CPU Compatible Mental Health Chatbot Model
This repository contains a fine-tuned LLaMA-based model designed for mental health counseling conversations. The model provides meaningful and empathetic responses to mental health-related queries, and it runs on CPU-only systems with modest RAM (around 15 GB), making it accessible to a wide range of users.
---
## Features
- **Fine-tuned on Mental Health Counseling Conversations**: The model is trained using a dataset specifically curated for mental health support.
- **Low Resource Requirements**: Runs entirely on CPU with about 15 GB of RAM; no GPU required.
- **Based on Meta's Llama 3.2 1B model**: Builds on the strengths of the LLaMA architecture for high-quality responses.
- **Supports LoRA (Low-Rank Adaptation)**: Enables efficient fine-tuning with low computational overhead.
---
## Model Details
- **Base Model**: [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
- **Dataset**: [Amod/Mental Health Counseling Conversations](https://huggingface.co/datasets/Amod/mental_health_counseling_conversations) (see the loading sketch below)
- **Fine-Tuning Framework**: Hugging Face Transformers
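
For reference, the counseling dataset can be inspected directly with the `datasets` library. This is a minimal sketch; the split name (`train`) and the column names (`Context`/`Response`) are taken from the public dataset card and may change, so treat them as assumptions.

```python
from datasets import load_dataset

# Load the public counseling dataset used for fine-tuning
ds = load_dataset("Amod/mental_health_counseling_conversations")

print(ds)              # splits and sizes
print(ds["train"][0])  # one example; expected keys: "Context" and "Response" (assumed)
```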
---
## Installation
1. Clone the repository:
```bash
git clone https://huggingface.co/<your_hf_username>/mental-health-chatbot-model
cd mental-health-chatbot-model
```
2. Install the required packages (an optional import check follows):
```bash
pip install torch transformers datasets huggingface-hub
```
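
Optionally, confirm that the core libraries installed correctly with a quick import check (the versions printed are simply whatever pip resolved):

```python
# Quick sanity check that the installed packages import cleanly
import torch
import transformers
import datasets

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("datasets:", datasets.__version__)
```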
---
## Usage
### Load the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model and tokenizer
model_name = "<your_hf_username>/mental-health-chatbot-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate a response
input_text = "I feel anxious and don't know what to do."
inputs = tokenizer(input_text, return_tensors="pt")
response = model.generate(**inputs, max_length=256, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(response[0], skip_special_tokens=True))
```
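
By default, `generate` decodes greedily (unless the repository ships a generation config that says otherwise). If you want more varied responses, you can enable sampling. Continuing from the snippet above, the values below are illustrative defaults, not parameters tuned for this model:

```python
# Sampling-based generation (illustrative settings, not tuned for this model)
response = model.generate(
    **inputs,
    max_new_tokens=200,                    # cap on newly generated tokens
    do_sample=True,                        # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(response[0], skip_special_tokens=True))
```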
### Compatibility
This model can be run on the following setups; a minimal CPU-loading sketch follows the list:
- CPU-only systems
- Machines with as little as 15 GB RAM
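
For CPU-only machines, loading in full precision is the most broadly compatible option. This is a minimal sketch; the thread count is illustrative and `float32` is assumed as the safest CPU dtype:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

torch.set_num_threads(4)  # illustrative; set to your machine's physical core count

model_name = "<your_hf_username>/mental-health-chatbot-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# float32 is the most broadly compatible dtype on CPU
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)
model.to("cpu")
model.eval()  # inference mode; disables dropout
```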
---
## Fine-Tuning Instructions
To further fine-tune the model on your own dataset (a LoRA-based alternative is sketched after the script):
1. Prepare and tokenize your dataset in the Hugging Face `datasets` format.
2. Use the following script:
```python
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling

# `model` and `tokenizer` are loaded as in the Usage section;
# `train_dataset` and `validation_dataset` are the tokenized splits from step 1.
training_args = TrainingArguments(
    output_dir="./fine_tuned_model",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    evaluation_strategy="epoch",  # named `eval_strategy` in recent transformers releases
    save_steps=500,
    logging_dir="./logs",
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=validation_dataset,
    # Builds causal-LM labels from the input IDs
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()
```
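
The Features section mentions LoRA support. Below is a minimal parameter-efficient sketch using the `peft` library (install it separately with `pip install peft`); the rank, alpha, and target modules are illustrative choices typical for LLaMA-style attention layers, not the configuration used to train this checkpoint.

```python
from peft import LoraConfig, TaskType, get_peft_model

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # low-rank dimension (illustrative)
    lora_alpha=16,                        # scaling factor (illustrative)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # common choice for LLaMA attention projections
)

# Wrap the base model so only the small LoRA adapters are trained
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity check: only a small fraction should be trainable

# The wrapped model can then be passed to the same Trainer setup shown above.
```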
---
## Training Configuration
- **Training Epochs**: 3
- **Batch Size**: 4
- **Learning Rate**: 5e-5
- **Evaluation Strategy**: Epoch-wise
---
## License
This project is licensed under the [Apache 2.0 License](LICENSE).
---
## Acknowledgments
- [Meta](https://huggingface.co/meta-llama) for the LLaMA model
- [Hugging Face](https://huggingface.co/) for their open-source tools and datasets
- The creators of the Mental Health Counseling Conversations dataset