This repository contains TabuLa-8B (Tabular Llama-8B), a foundation model for prediction (classification and binned regression) on tabular data.

TabuLa-8B is described in the paper "Large Scale Transfer Learning for Tabular Data via Language Modeling."

For more details, see the paper, which includes a Model Card describing the model architecture, training procedure, and evaluation. TabuLa-8B was trained with the rtfm library, using the T4 dataset.

TabuLa-8B is built with Meta Llama 3.

Usage and Examples

You can load the model with the Hugging Face transformers library:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Download the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("mlfoundations/tabula-8b")
model = AutoModelForCausalLM.from_pretrained("mlfoundations/tabula-8b")

For more information on how to prepare data and run inference (including a demo notebook for performing inference on your data), see the examples in rtfm.
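As a rough illustration of inference, the sketch below serializes a single row as text and greedily decodes a short answer. Note that this is a minimal sketch only: the exact row-serialization and prompt format TabuLa-8B was trained on (including few-shot examples and answer choices) is implemented in rtfm, and the prompt string here is a hypothetical placeholder, not the training format. Follow the rtfm examples for real use.

import torch

# Hypothetical prompt for illustration; the real serialization scheme
# is implemented in rtfm and differs from this string.
prompt = (
    "Predict the target column for the following row.\n"
    "age: 39, education: Bachelors, hours-per-week: 40\n"
    "Does income exceed 50K? Answer yes or no: "
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=8, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
prediction = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(prediction)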

License and Terms of Use

TabuLa-8B is fine-tuned from the Llama 3 8B model. As a result, we release it under the Llama 3 license, and by using the model you agree to abide by the Llama 3 Community License Agreement and the Llama 3 Acceptable Use Policy.
