
Model Details

Model Description

  • Fine-tuned from the base model shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat on the dataset mentioned below, using unsloth. The fine-tuning makes the model uncensored.

Training Code

  • Open In Colab

Training Procedure Raw Files

  • All training was run on Vast.ai.

  • Hardware in Vast.ai:

    • GPU: 1x A100 SXM4 80GB

    • CPU: AMD EPYC 7513 32-Core Processor

    • RAM: 129 GB

    • Disk Space To Allocate: >150 GB

    • Docker Image: pytorch/pytorch:2.2.0-cuda12.1-cudnn8-devel

    • Download the ipynb file.
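A Vast.ai instance with the image above can also be reproduced locally with a command along these lines. This is a sketch: the image tag comes from the list above, but the `--gpus`, `--shm-size`, and volume-mount flags are illustrative assumptions, not part of the model card.

```shell
# Launch the PyTorch devel image listed above with GPU access.
# --gpus all requires the NVIDIA Container Toolkit; the shm-size and
# volume mount values here are illustrative defaults, not from the card.
docker run --gpus all -it --shm-size=16g \
    -v "$PWD":/workspace \
    pytorch/pytorch:2.2.0-cuda12.1-cudnn8-devel \
    /bin/bash
```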

Training Data

Usage

from transformers import pipeline

# This is a chat model, so use the text-generation pipeline; the
# question-answering pipeline expects an extractive QA model and a context.
chat = pipeline("text-generation", model="stephenlzc/Mistral-7B-v0.3-Chinese-Chat-uncensored")
question = "How to make girlfriend laugh? Please answer in Chinese."
print(chat(question, max_new_tokens=256)[0]["generated_text"])
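If you generate without the pipeline, the prompt must follow the model's chat template. The sketch below assumes the model inherits the standard Mistral-instruct template from its base model; verify against tokenizer.apply_chat_template() on the actual tokenizer before relying on it.

```python
# Sketch of the assumed Mistral-instruct prompt format: the user turn is
# wrapped in [INST] ... [/INST] after the <s> BOS token. Confirm with
# tokenizer.apply_chat_template() on the real tokenizer.
def build_prompt(user_message: str) -> str:
    return f"<s>[INST] {user_message} [/INST]"

prompt = build_prompt("How to make girlfriend laugh? Please answer in Chinese.")
print(prompt)
```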

Model size: 7.25B params (Safetensors, FP16)
