---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
---

# Uploaded model

- **Developed by:** wdli
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

The model was trained on the reddit_depression_dataset for 1 epoch.
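
For reference, a minimal training sketch along these lines, assuming Unsloth's `FastLanguageModel` with a LoRA adapter and TRL's `SFTTrainer` as stated above (the LoRA settings and hyperparameters below are illustrative placeholders, not the exact values used for this checkpoint):

```python
# Minimal sketch, not the exact training script for this checkpoint.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",  # base model from this card
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # illustrative LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # reddit_depression_dataset, formatted as shown below
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        num_train_epochs=1,  # epoch = 1, as noted above
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```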

The training data uses a dialog (chat) format, but the user turn is omitted: each example consists only of a system prompt and the post text as the assistant response.

For example:

```python
def formatting_prompts_func(examples):
    texts_dataset = examples["text"]
    formatted_prompts = []
    for text in texts_dataset:
        # Each post becomes the assistant turn; the user turn is left out.
        dialog = [
            {"role": "system", "content": "You are a patient undergoing depression."},
            # {"role": "user", "content": ""},
            {"role": "assistant", "content": text},
        ]
        formatted_prompt = tokenizer.apply_chat_template(
            dialog, tokenize=False, add_generation_prompt=False
        )
        formatted_prompts.append(formatted_prompt)
    return {"text": formatted_prompts}
```
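
A typical way to apply this function with the Datasets library would be something like the sketch below; the dataset identifier is a placeholder, since this card does not give the exact Hub path of reddit_depression_dataset:

```python
# Sketch only: "path/to/reddit_depression_dataset" is a placeholder, not a confirmed Hub id.
from datasets import load_dataset

dataset = load_dataset("path/to/reddit_depression_dataset", split="train")
dataset = dataset.map(formatting_prompts_func, batched=True)
```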