Model Details
Model Description
- Uses shenzhi-wang/Gemma-2-9B-Chinese-Chat as the base model and fine-tunes it via Unsloth on the dataset mentioned below, which makes the model uncensored (see the training sketch after this item).
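As a rough illustration of that setup, the snippet below is a minimal Unsloth LoRA SFT sketch, not the actual training script: the dataset id ("your-dataset-id"), the text field, and every hyperparameter are placeholders.

# Minimal Unsloth LoRA SFT sketch (hypothetical settings, placeholder dataset id).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="shenzhi-wang/Gemma-2-9B-Chinese-Chat",  # base model from the description
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
dataset = load_dataset("your-dataset-id", split="train")  # placeholder, not the real dataset

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes a pre-formatted "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
    ),
)
trainer.train()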
Training Code and Log
Training Procedure Raw Files
All training was run on Runpod.io.
Hardware on Vast.ai:
- GPU: 1 × A100 SXM 80 GB
- CPU: 16 vCPU
- RAM: 251 GB
- Disk space to allocate: >150 GB
- Docker image: runpod/pytorch:2.2.0-py3.10-cuda12.1.1-devel-ubuntu22.04
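Before launching a run, a quick sanity check like the one below (a sketch; it only assumes PyTorch, which the Docker image above ships with) can confirm the rented instance matches these specs.

# Sanity-check the rented instance against the specs listed above (sketch).
import shutil
import torch

assert torch.cuda.is_available(), "No CUDA device visible"
print(torch.cuda.get_device_name(0))                           # expect an A100 SXM 80 GB
print(torch.cuda.get_device_properties(0).total_memory / 1e9)  # VRAM in GB, ~80
print(shutil.disk_usage("/").free / 1e9)                       # free disk in GB, want >150
print(torch.__version__, torch.version.cuda)                   # expect 2.2.0 / 12.1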
Training Data
Base Model: shenzhi-wang/Gemma-2-9B-Chinese-Chat
Dataset
Usage
# This is a chat model; use the text-generation pipeline (extractive question-answering does not apply).
from transformers import pipeline
generator = pipeline("text-generation", model="stephenlzc/Gemma-2-9B-Chinese-Chat-Uncensored", device_map="auto")
messages = [{"role": "user", "content": "How to make my girlfriend laugh? Please answer in Chinese."}]
output = generator(messages, max_new_tokens=512)
print(output[0]["generated_text"][-1]["content"])
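For more control over dtype, device placement, and sampling, the same request can also go through the chat template directly; a hedged sketch, assuming a recent transformers release with Gemma 2 support and enough GPU memory for bfloat16:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stephenlzc/Gemma-2-9B-Chinese-Chat-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "How to make my girlfriend laugh? Please answer in Chinese."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))  # only the new reply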
Model tree for themex1380/Gemma-2-9B-Chinese-Chat-Uncensored
- Base model: google/gemma-2-9b
- Finetuned: google/gemma-2-9b-it
- Finetuned: shenzhi-wang/Gemma-2-9B-Chinese-Chat