
Introduction

We introduce luxia-21.4b-alignment-v1.0, an instruction-tuned and preference-aligned model based on luxia-21.4b. Please refer to the evaluation results table below for details.

Instruction Fine-tuning Strategy

We utilize state-of-the-art instruction fine-tuning methods, including supervised fine-tuning (SFT) and direct preference optimization (DPO).
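DPO optimizes the policy directly on preference pairs, with no separately trained reward model. As a minimal illustration of the per-pair DPO objective (the β value and log-probabilities below are illustrative placeholders, not the settings used for this model):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is the summed token log-probability of the chosen or
    rejected response under the trained policy or the frozen reference model.
    """
    margin = (policy_chosen_logp - ref_chosen_logp) - (
        policy_rejected_logp - ref_rejected_logp)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# With no preference margin the loss sits at log 2; it falls as the policy
# assigns relatively more probability to the chosen response.
print(dpo_loss(-10.0, -10.0, -10.0, -10.0))
```

Training minimizes this loss over a dataset of (prompt, chosen, rejected) triples, which nudges the policy toward responses humans preferred while the reference-model terms keep it from drifting too far from the SFT starting point.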

Data Contamination Test Results

Results will be updated soon.

Evaluation Results

Results will be updated soon.

Usage Instructions

How to use

# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("saltlux/luxia-21.4b-alignment-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "saltlux/luxia-21.4b-alignment-v1.0",
    device_map="auto",          # shard across available GPUs automatically
    torch_dtype=torch.float16,  # FP16, matching the released checkpoint
)

License

Contact Us

Questions and suggestions are welcome in the Discussion tab.

Model size: 21.4B params · Tensor type: FP16 (Safetensors)
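As a rough sizing check (a back-of-envelope estimate, not a measured figure): 21.4B parameters at 2 bytes each in FP16 come to roughly 40 GiB for the weights alone, before activations or KV cache.

```python
params = 21.4e9          # parameter count from the model card
bytes_per_param = 2      # FP16 = 16 bits per parameter
weight_gib = params * bytes_per_param / 1024**3
print(f"~{weight_gib:.1f} GiB of weights")  # ≈ 39.9 GiB
```

In practice this means multi-GPU inference or a quantized variant is typically needed on common 24–48 GB accelerators.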
