Mulberry

Mulberry-llava-8b is a step-by-step reasoning model trained on the Mulberry-260K SFT dataset, which was generated through collective knowledge search using CoMCTS (Collective Monte Carlo Tree Search).

For reasoning inference, please refer to our GitHub repository.
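The GitHub repository contains the full reasoning pipeline. As a rough sketch only, the snippet below shows how loading this checkpoint with 🤗 Transformers might look, assuming the model keeps the chat template and processor classes of its llava-next base model; the file name `example.jpg`, the prompt text, and generation settings are placeholders, so check the repository for the exact reasoning prompt.

```python
import os

MODEL_ID = "HuanjinYao/Mulberry_llava_8b"  # this model card's checkpoint

def build_conversation(question: str) -> list:
    """Wrap one image and one question in the llava-next chat-message format."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": question},
            ],
        }
    ]

def run_inference(image_path: str, question: str) -> str:
    """Load the checkpoint and generate a step-by-step answer.

    Imports are deferred so the helper above works without torch/transformers
    installed; actually running this downloads the full BF16 weights.
    """
    import torch
    from PIL import Image
    from transformers import LlavaNextForConditionalGeneration, LlavaNextProcessor

    processor = LlavaNextProcessor.from_pretrained(MODEL_ID)
    model = LlavaNextForConditionalGeneration.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    prompt = processor.apply_chat_template(
        build_conversation(question), add_generation_prompt=True
    )
    inputs = processor(
        images=Image.open(image_path), text=prompt, return_tensors="pt"
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=512)
    return processor.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__" and os.environ.get("RUN_MULBERRY_DEMO"):
    # Guarded so the file can be imported without triggering a weight download.
    print(run_inference("example.jpg", "Solve the problem in the image step by step."))
```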

Paper: https://arxiv.org/abs/2412.18319

Code: https://github.com/HJYao00/Mulberry

More Details

Base Model: https://huggingface.co/llava-hf/llama3-llava-next-8b-hf

Training Framework: LLaMA-Factory

Hardware: 8x NVIDIA H100

Model size: 8.36B params

Tensor type: BF16

Weights format: Safetensors