
Obsidian: the world's smallest multi-modal LLM, and the first multi-modal model at the 3B size.

Model Name: Obsidian-3B-V0.5

Obsidian is a brand-new series of multi-modal language models. This first project is led by Quan N. and Luigi D. (LDJ).

Obsidian-3B-V0.5 is a multi-modal AI model with vision. Its language backbone is Capybara-3B-V1.9, which is based on StableLM-3B-4e1t. Capybara-3B-V1.9 achieves state-of-the-art performance compared to models of similar size, and even beats some 7B models.

Current finetuning and inference code is available on our GitHub repo.


Obsidian-3B-V0.5 was developed and finetuned by Nous Research, in collaboration with Virtual Interactive. Special thanks to LDJ for the wonderful Capybara dataset, and to qnguyen3 for the model training procedure.

Model Training

Obsidian-3B-V0.5 followed the same training procedure as LLaVA 1.5.

Prompt Format

The model follows the ChatML format, but with `###` as the separator. Example:

What is this sign about?\n<image>
The sign is about bullying, and it is placed on a black background with a red background.
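As an illustrative sketch only, a single-turn prompt in this style could be assembled as below. The ChatML role tags (`<|im_start|>`, `<|im_end|>`) and the exact placement of the `###` separator are assumptions for illustration; the inference code in the GitHub repo is the authoritative reference.

```python
# Hedged sketch: build a ChatML-style prompt joined with the "###"
# separator described in this card. Role-tag layout is assumed, not
# taken from the repo's actual template.
def build_prompt(user_message: str) -> str:
    """Assemble a single-turn prompt ending with an open assistant turn."""
    user_turn = f"<|im_start|>user\n{user_message}<|im_end|>"
    assistant_turn = "<|im_start|>assistant"
    # Join the turns with "###", per the separator convention above.
    return "###".join([user_turn, assistant_turn])

prompt = build_prompt("What is this sign about?\n<image>")
print(prompt)
```

The `<image>` placeholder marks where the vision tower's image embeddings are spliced into the token stream, as in LLaVA-style pipelines.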


Coming Soon!


Citation

@misc{Obsidian-3B,
  title={Obsidian-3B: First Multi-modal below 7B Parameters.},
  author={Nguyen, Quan and Daniele},
}

@article{amplify-instruct,
  title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for Efficient LLM Training.},
  author={Daniele, Luigi and Suphavadeeprasit},
  journal={arXiv preprint arXiv:(coming soon)},
}