---
license: apache-2.0
language:
  - en
---

# **Introduction**
We introduce luxia-21.4b-alignment-v1.0, an instruction-tuned and aligned model built on luxia-21.4b.
Please refer to the evaluation results table for details.

# **Instruction Fine-tuning Strategy**
We utilize state-of-the-art instruction fine-tuning methods, including supervised fine-tuning (SFT) and direct preference optimization (DPO).
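
As a rough illustration of the DPO objective (this is not Saltlux's training code; the function name and the `beta` value are our own assumptions), the per-pair loss can be computed from the summed token log-probabilities of the chosen and rejected responses under the policy and a frozen reference model:

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for a single preference pair.

    Each argument is the summed log-probability of one response
    (chosen or rejected) under the policy or the reference model.
    """
    # Implicit reward margin: how much more the policy favors the chosen
    # response over the reference model, minus the same gap for the
    # rejected response.
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    # -log(sigmoid(beta * margin)), written in a numerically stable form.
    return math.log1p(math.exp(-beta * margin))
```

A zero margin gives a loss of log(2); the loss shrinks as the policy widens the preference gap relative to the reference model.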

# **Data Contamination Test Results**
Results will be updated soon.

# **Evaluation Results**
Results will be updated soon.


# **Usage Instructions**

### **How to use**
```python
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("saltlux/luxia-21.4b-alignment-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "saltlux/luxia-21.4b-alignment-v1.0",
    device_map="auto",          # spread layers across available devices
    torch_dtype=torch.float16,  # half precision to fit the 21.4B weights
)

# Generate a completion for a prompt
prompt = "What is direct preference optimization?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### **License**
- [saltlux/luxia-21.4b-alignment-v1.0](https://huggingface.co/saltlux/luxia-21.4b-alignment-v1.0): apache-2.0


### **Contact Us**
Questions and suggestions are welcome in the discussion tab.