WASSA 2024 Tracks 1, 2, 3 LLM Based on Llama3-8B-Instruct (Pure LoRA Training)

This model targets WASSA 2024 Tracks 1, 2, and 3. It is fine-tuned from Llama3-8B-Instruct using standard-prediction, role-play, and contrastive supervised fine-tuning templates. The learning rate is 8e-5.
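Since the card gives no usage snippet, here is a minimal sketch of building a single-turn prompt in the standard Llama-3-Instruct chat format. The system and user messages below are hypothetical placeholders; the exact standard-prediction, role-play, and contrastive SFT templates are defined in the paper and not reproduced here.

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Format a single-turn chat in the standard Llama-3-Instruct template."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Hypothetical empathy-detection prompt for illustration only.
prompt = build_llama3_prompt(
    "You are an empathy-detection assistant.",
    "Rate the empathy of the following essay: ...",
)
print(prompt)
```

In practice, `tokenizer.apply_chat_template` from the `transformers` library produces the same format automatically when the tokenizer is loaded from this repository.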

For training and usage details, please refer to the accompanying paper.

License

This repository's models are open-sourced under the Apache-2.0 license, and use of their weights must also comply with the Llama3 model license.

Model size: 8.03B params (Safetensors, BF16)