---
license: llama3.1
language:
- 'no'
- nb
- nn
---
### Model Overview
**NB-Llama-3.1-8B-Vision-Instruct** is part of the **NB-Llama-3.1** series of models, trained on top of [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). This multilingual generative model was fine-tuned specifically to support Norwegian Bokmål, Norwegian Nynorsk, and English, with partial support for Swedish and Danish.

The Instruct model is trained with Supervised Fine-Tuning (SFT) followed by Direct Preference Optimization (DPO). The SFT stage is based on synthetic datasets: the English [Magpie](https://huggingface.co/Magpie-Align) dataset and a translated and filtered version of it. The DPO stage is based on [Anthropic's Helpful and Harmless](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset. The training is intentionally fairly basic, giving the models a decent understanding of the chat template.
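Since the SFT stage mainly teaches the chat template, the prompt layout that Llama 3.1 models expect can be sketched as below. This is a simplified illustration using the standard Llama 3.1 special tokens; in practice `tokenizer.apply_chat_template` handles this for you.

```python
def format_llama31_chat(messages):
    """Render a list of {"role", "content"} dicts as a Llama 3.1 prompt string."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each turn: a role header, the content, and an end-of-turn token.
        parts.append(f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n")
        parts.append(f"{msg['content']}<|eot_id|>")
    # Leave the assistant header open so the model continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


prompt = format_llama31_chat([
    {"role": "system", "content": "Du er en hjelpsom assistent."},
    {"role": "user", "content": "Hva er hovedstaden i Norge?"},
])
```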

The basic idea behind this model series was to explore how far current state-of-the-art models could be improved for Norwegian by training only on publicly available data. While these models are trained by the National Library of Norway, they do not include any data available only through legal deposit. They do, however, contain public data such as governmental reports that are both publicly available and legally deposited.

The Vision-Instruct model is highly experimental. It merges the vision components of meta-llama/Llama-3.2-8B-Vision-Instruct with the NB version of the Llama-3.1 model.
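A minimal usage sketch is shown below. The repository id and image path are illustrative assumptions, not confirmed identifiers; `MllamaForConditionalGeneration` and `AutoProcessor` are the transformers classes used for Llama 3.2 vision checkpoints.

```python
def build_vision_messages(question):
    """Build the chat message list the Mllama processor expects:
    an image placeholder followed by the text question."""
    return [{
        "role": "user",
        "content": [{"type": "image"}, {"type": "text", "text": question}],
    }]


def run_inference(image_path, question,
                  repo_id="NbAiLab/nb-llama-3.1-8B-vision-instruct"):
    """Sketch of running the Vision-Instruct model with transformers.
    NOTE: the default repo_id above is an assumption for illustration.
    Heavy imports are kept local so the module imports cheaply."""
    import torch
    from PIL import Image
    from transformers import AutoProcessor, MllamaForConditionalGeneration

    model = MllamaForConditionalGeneration.from_pretrained(
        repo_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(repo_id)

    image = Image.open(image_path)
    prompt = processor.apply_chat_template(
        build_vision_messages(question), add_generation_prompt=True
    )
    inputs = processor(image, prompt, return_tensors="pt").to(model.device)

    output = model.generate(**inputs, max_new_tokens=128)
    return processor.decode(output[0], skip_special_tokens=True)
```

For example, `run_inference("example.jpg", "Hva ser du på dette bildet?")` would describe a local image in Norwegian, assuming the checkpoint behaves as intended.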

### Why do we release this model?
We are releasing this model because Unsloth released [finetuning code for Llama vision models](https://unsloth.ai/blog/vision). Please experiment with it and let us know your results.