---
license: apache-2.0
datasets:
- yuan-tian/chartgpt-dataset-llama3
language:
- en
metrics:
- rouge
pipeline_tag: text2text-generation
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- text-generation-inference
---
# Model Card for ChartGPT-Llama3

## Model Details

### Model Description

This model generates charts from natural language. For more information, please refer to the paper.
- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Meta-Llama-3-8B-Instruct
- **Research paper:** ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language
## Model Input Format
The model input at step x is:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Your response should follow the following format:
{Step 1 prompt}
{Step x-1 prompt}
{Step x prompt}
### Instruction:
{instruction}
### Input:
Table Name: {table name}
Table Header: {column names}
Table Header Type: {column types}
Table Data Example:
{data row 1}
{data row 2}
Previous Answer:
{previous answer}
### Response:
```
The model should then output the answer corresponding to step x.
The prompts for steps 1-6 are as follows:

```
Step 1. Select the columns:
Step 2. Filter the data:
Step 3. Add aggregate functions:
Step 4. Choose chart type:
Step 5. Select encodings:
Step 6. Sort the data:
```
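Assembling the step-x input from a table and the step prompts can be sketched as follows. This is a minimal illustration of the template above; the helper name and argument layout are assumptions for illustration, not part of the released package.

```python
def build_prompt(step_prompts, instruction, table_name, header, header_types,
                 rows, previous_answer=""):
    """Assemble the model input for the current step, following the
    template above. Names here are illustrative only."""
    lines = [
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.",
        "Your response should follow the following format:",
        *step_prompts,          # prompts for steps 1..x
        "### Instruction:",
        instruction,
        "### Input:",
        f"Table Name: {table_name}",
        f"Table Header: {header}",
        f"Table Header Type: {header_types}",
        "Table Data Example:",
        *rows,                  # a couple of example data rows
        "Previous Answer:",
        previous_answer,        # answers from earlier steps, empty at step 1
        "### Response:",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    ["Step 1. Select the columns:"],
    "Give me a visual representation of the faculty members by their professional status.",
    "Faculty",
    "FacID,Lname,Fname,Rank",
    "quantitative,nominal,nominal,nominal",
    ["1082,Giuliano,Mark,Instructor", "1121,Goodrich,Michael,Professor"],
)
```

At later steps, the accumulated answers from earlier steps would be passed in via `previous_answer`.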
## How to Get Started with the Model

### Running the Model on a GPU

Here is an example using a faculty dataset with the instruction "Give me a visual representation of the faculty members by their professional status." The model should give the answers to all steps. You can use the code below to test whether the model runs successfully.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("yuan-tian/chartgpt-llama3")
model = AutoModelForCausalLM.from_pretrained("yuan-tian/chartgpt-llama3", device_map="auto")

input_text = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Your response should follow the following format:
Step 1. Select the columns:
Step 2. Filter the data:
Step 3. Add aggregate functions:
Step 4. Choose chart type:
Step 5. Select encodings:
Step 6. Sort the data:
### Instruction:
Give me a visual representation of the faculty members by their professional status.
### Input:
Table Name: Faculty
Table Header: FacID,Lname,Fname,Rank,Sex,Phone,Room,Building
Table Header Type: quantitative,nominal,nominal,nominal,nominal,quantitative,nominal,nominal
Table Data Example:
1082,Giuliano,Mark,Instructor,M,2424,224,NEB
1121,Goodrich,Michael,Professor,M,3593,219,NEB
Previous Answer:
### Response:"""
inputs = tokenizer(input_text, return_tensors="pt", padding=True).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details

### Training Data

This model is fine-tuned from Meta-Llama-3-8B-Instruct on the chartgpt-dataset-llama3 dataset.
### Training Procedure

Details of the preprocessing and training procedure are planned to be added in a future update.
## Citation

**BibTeX:**

```bibtex
@article{tian2024chartgpt,
  title={ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language},
  author={Tian, Yuan and Cui, Weiwei and Deng, Dazhen and Yi, Xinjing and Yang, Yurun and Zhang, Haidong and Wu, Yingcai},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2024},
  pages={1-15},
  doi={10.1109/TVCG.2024.3368621}
}
```