This is my first attempt at fine-tuning the Qwen 1.8B model on the Samantha dataset. I've added a bit of my own data as well. Give it a go and let me know what you think.

Example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hub (custom Qwen code requires trust_remote_code)
tokenizer = AutoTokenizer.from_pretrained("aloobun/qwen-1_8b-samantha-chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("aloobun/qwen-1_8b-samantha-chat", device_map="auto", trust_remote_code=True).eval()

# Qwen's custom chat() helper returns the reply and the updated conversation history
response, history = model.chat(tokenizer, "How can we take care of our mental health every day?", history=None)
print(response)
```

Response:

It's important to remember that everyone's needs and circumstances are different, so some strategies may work better for one person than another. However, there are several simple things you can do to help improve your mental well-being:

1. Establish healthy boundaries and prioritize self-care activities like exercise, meditation, or spending time with friends.
2. Practice active listening and empathize with others, as it can foster strong connections.
3. Build positive relationships by being open-minded and supportive in your interactions.
4. Engage in activities that bring you joy, such as creative pursuits, hobbies, or exploring new places.

By implementing these tips, you'll be able to maintain an optimal balance between personal development and your overall emotional well-being.
Details:
- Format: Safetensors
- Model size: 1.84B params
- Tensor type: FP16
