
Introduction

MoMo-72B-lora-1.8.4-DPO is trained via Direct Preference Optimization (DPO) from MoMo-72B-LoRA-V1.4 as its base model, with several hyperparameter optimizations.
MoMo-72B-LoRA-V1.4 is trained via Supervised Fine-Tuning (SFT) using LoRA, with the QWEN-72B model as its base model.
Note that we did not use any form of weight merging.
For the leaderboard submission, the trained weights were realigned for compatibility with the Llama architecture.
MoMo-72B is trained using Moreh's MoAI platform, which simplifies the training of large-scale models, and AMD MI250 GPUs.
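
The DPO training code itself is not published in this card. As a rough sketch, a DPO run of this shape could be set up with the trl library's DPOTrainer; the dataset, hyperparameters, and output path below are illustrative assumptions, not the authors' actual configuration.

# Hypothetical DPO sketch (not the authors' code). trl's DPOTrainer expects a
# preference dataset with "prompt", "chosen", and "rejected" columns.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "moreh/MoMo-72B-LoRA-V1.4"  # the SFT base model named above
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)      # policy to optimize
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy

# Assumed local preference data; replace with a real preference dataset.
train_dataset = load_dataset("json", data_files="preferences.json")["train"]

trainer = DPOTrainer(
    model,
    ref_model,
    args=TrainingArguments(
        output_dir="momo-dpo-sketch",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=5e-7,
        remove_unused_columns=False,  # keep the preference columns for the trainer
    ),
    beta=0.1,  # DPO temperature; an assumed value, not the card's setting
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()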

Details

Used Libraries

  • torch
  • peft
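
Of these, peft provides the LoRA implementation used for the SFT stage. A minimal sketch of attaching LoRA adapters to the base model with peft follows; the rank, alpha, dropout, and target modules are illustrative assumptions, not the card's actual settings.

# Hypothetical LoRA setup sketch (not the authors' configuration).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# QWEN-72B is the base model named in the introduction.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-72B", trust_remote_code=True)
lora_config = LoraConfig(
    r=16,                       # assumed adapter rank
    lora_alpha=32,              # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],  # assumed target: QWEN's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable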

Used Datasets

| Model                   | ARC | MMLU | TruthfulQA | GSM8K |
|-------------------------|-----|------|------------|-------|
| V1.4 (result < 0.1, %)  | TBU | TBU  | TBU        | TBU   |

Used Environments

  • AMD MI250 GPU
  • Moreh's MoAI platform

How to use

# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-lora-1.8.4-DPO")
model = AutoModelForCausalLM.from_pretrained(
    "moreh/MoMo-72B-lora-1.8.4-DPO",
    torch_dtype=torch.float16,  # load in half precision; the F32 weights need ~290 GB otherwise
    device_map="auto",          # requires `accelerate`; shards the 72B model across available GPUs
)
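
A short generation example (the prompt and decoding parameters are illustrative):

prompt = "What is Direct Preference Optimization?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))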
