
Uploaded model

  • Developed by: wdli
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

The model was trained on the reddit_depression_dataset for 1 epoch.

The training data is formatted as dialogs, but the user's turn is left out: each dataset entry becomes the assistant's message.

For example:

```python
def formatting_prompts_func(examples):
    # `tokenizer` is the model's tokenizer, loaded beforehand.
    texts_dataset = examples["text"]
    formatted_prompts = []
    for text in texts_dataset:
        # Each dataset entry becomes the assistant turn; no user turn is added.
        dialog = [
            {"role": "system", "content": "You are a patient undergoing depression."},
            {"role": "assistant", "content": text},
        ]
        formatted_prompt = tokenizer.apply_chat_template(
            dialog, tokenize=False, add_generation_prompt=False
        )
        formatted_prompts.append(formatted_prompt)
    return {"text": formatted_prompts}
```
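To make the resulting training text visible without loading the tokenizer, here is a minimal, self-contained sketch that manually renders the same dialog in the Llama 3 instruct chat layout (the format `tokenizer.apply_chat_template` produces for this model family). The `render_llama3_dialog` helper and the sample text are illustrative, not part of the training code.

```python
def render_llama3_dialog(dialog):
    """Render a list of {"role", "content"} turns in the Llama 3 chat layout."""
    parts = ["<|begin_of_text|>"]
    for turn in dialog:
        # Each turn is wrapped in role headers and terminated with <|eot_id|>.
        parts.append(
            f"<|start_header_id|>{turn['role']}<|end_header_id|>\n\n"
            f"{turn['content']}<|eot_id|>"
        )
    return "".join(parts)

# Sample entry standing in for one dataset text (hypothetical content).
dialog = [
    {"role": "system", "content": "You are a patient undergoing depression."},
    {"role": "assistant", "content": "I haven't slept well in weeks."},
]
print(render_llama3_dialog(dialog))
```

The printed string is what one training example looks like after formatting: a system turn carrying the persona, followed directly by the assistant turn carrying the dataset text, with no user turn in between.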
Model tree for wdli/llama3-instruct_depression_2

Finetuned
(815)
this model