
Quantization made by Richard Erkhov.

| Github | Discord | Request more models |

TableLLM-7b - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| TableLLM-7b.Q2_K.gguf | Q2_K | 2.36GB |
| TableLLM-7b.IQ3_XS.gguf | IQ3_XS | 2.6GB |
| TableLLM-7b.IQ3_S.gguf | IQ3_S | 0.38GB |
| TableLLM-7b.Q3_K_S.gguf | Q3_K_S | 2.75GB |
| TableLLM-7b.IQ3_M.gguf | IQ3_M | 2.9GB |
| TableLLM-7b.Q3_K.gguf | Q3_K | 3.07GB |
| TableLLM-7b.Q3_K_M.gguf | Q3_K_M | 3.07GB |
| TableLLM-7b.Q3_K_L.gguf | Q3_K_L | 3.35GB |
| TableLLM-7b.IQ4_XS.gguf | IQ4_XS | 3.4GB |
| TableLLM-7b.Q4_0.gguf | Q4_0 | 3.56GB |
| TableLLM-7b.IQ4_NL.gguf | IQ4_NL | 3.58GB |
| TableLLM-7b.Q4_K_S.gguf | Q4_K_S | 3.59GB |
| TableLLM-7b.Q4_K.gguf | Q4_K | 3.8GB |
| TableLLM-7b.Q4_K_M.gguf | Q4_K_M | 3.8GB |
| TableLLM-7b.Q4_1.gguf | Q4_1 | 3.95GB |
| TableLLM-7b.Q5_0.gguf | Q5_0 | 4.33GB |
| TableLLM-7b.Q5_K_S.gguf | Q5_K_S | 4.33GB |
| TableLLM-7b.Q5_K.gguf | Q5_K | 4.45GB |
| TableLLM-7b.Q5_K_M.gguf | Q5_K_M | 4.45GB |
| TableLLM-7b.Q5_1.gguf | Q5_1 | 4.72GB |
| TableLLM-7b.Q6_K.gguf | Q6_K | 5.15GB |
| TableLLM-7b.Q8_0.gguf | Q8_0 | 6.67GB |
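
Any file from the table can be run locally with llama.cpp bindings. The snippet below is a minimal sketch using llama-cpp-python and huggingface_hub; the `<repo-id>` placeholder stands for this repository's id, and the prompt follows the [INST] template described later in this card.

```python
# Minimal sketch: download one quant and run it with llama-cpp-python.
# Assumptions: `llama-cpp-python` and `huggingface_hub` are installed,
# and <repo-id> is replaced with this GGUF repository's actual id.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="<repo-id>",                 # placeholder for this repository
    filename="TableLLM-7b.Q4_K_M.gguf",  # any file from the table above
)

llm = Llama(model_path=model_path, n_ctx=4096)

prompt = (
    "[INST]Below are the first few lines of a CSV file. "
    "You need to write a Python program to solve the provided question.\n\n"
    "Header and first few lines of CSV file:\n"
    "Sex,Length,Diameter\nM,0.455,0.365\nF,0.53,0.42\n\n"
    "Question: How many rows have Sex equal to M?[/INST]"
)

output = llm(prompt, max_tokens=512, temperature=0.0)
print(output["choices"][0]["text"])
```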

Original model description:

License: llama2
Datasets: RUCKBReasoning/TableLLM-SFT
Language: en
Tags: Table, QA, Code

TableLLM: Enabling Tabular Data Manipulation by LLMs in Real Office Usage Scenarios

| Paper | Training set | Github | Homepage |

We present TableLLM, a powerful large language model designed to handle tabular data manipulation tasks efficiently, whether the tables are embedded in spreadsheets or documents, meeting the demands of real office scenarios. The TableLLM series comes in two scales, TableLLM-7B and TableLLM-13B, fine-tuned from CodeLlama-7B and CodeLlama-13B respectively.

TableLLM generates either a code solution or a direct text answer to handle tabular data manipulation tasks, depending on the scenario. Code generation is used for spreadsheet-embedded tabular data, which often involves insert, delete, update, query, merge, and plot operations on tables. Text generation is used for document-embedded tabular data, which typically involves query operations on short tables.
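
As a purely illustrative sketch (not actual model output), a code solution for a spreadsheet query typically boils down to a short pandas program run over the full file:

```python
# Illustrative only: the kind of program a "code solution" corresponds to.
# "data.csv" is a hypothetical export of the full spreadsheet.
import pandas as pd

df = pd.read_csv("data.csv")
# Example query operation: count the rows where the Sex column equals "M".
print(int((df["Sex"] == "M").sum()))
```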

Evaluation Results

We evaluate the code solution generation ability of TableLLM on three benchmarks: WikiSQL, Spider, and a self-created table operation benchmark. The text answer generation ability is tested on four benchmarks: WikiTableQuestion (WikiTQ), TAT-QA, FeTaQA, and OTTQA. The evaluation results are shown below:

| Model | WikiTQ | TAT-QA | FeTaQA | OTTQA | WikiSQL | Spider | Self-created | Average |
| ----- | ------ | ------ | ------ | ----- | ------- | ------ | ------------ | ------- |
| TaPEX | 38.5 | – | – | – | 83.9 | 15.0 | / | 45.8 |
| TaPas | 31.5 | – | – | – | 74.2 | 23.1 | / | 42.92 |
| TableLlama | 24.0 | 22.2 | 20.5 | 6.4 | 43.7 | 9.0 | / | 20.7 |
| GPT3.5 | 58.5 | 72.1 | 71.2 | 60.8 | 81.7 | 67.4 | 77.1 | 69.8 |
| GPT4 | 74.1 | 77.1 | 78.4 | 69.5 | 84.0 | 69.5 | 77.8 | 75.8 |
| Llama2-Chat (13B) | 48.8 | 49.6 | 67.7 | 61.5 | – | – | – | 56.9 |
| CodeLlama (13B) | 43.4 | 47.2 | 57.2 | 49.7 | 38.3 | 21.9 | 47.6 | 43.6 |
| Deepseek-Coder (33B) | 6.5 | 11.0 | 7.1 | 7.4 | 72.5 | 58.4 | 73.9 | 33.8 |
| StructGPT (GPT3.5) | 52.5 | 27.5 | 11.8 | 14.0 | 67.8 | 84.8 | / | 48.9 |
| Binder (GPT3.5) | 61.6 | 12.8 | 6.8 | 5.1 | 78.6 | 52.6 | / | 42.5 |
| DATER (GPT3.5) | 53.4 | 28.4 | 18.3 | 13.0 | 58.2 | 26.5 | / | 37.0 |
| TableLLM-7B (Ours) | 58.8 | 66.9 | 72.6 | 63.1 | 86.6 | 82.6 | 78.8 | 72.8 |
| TableLLM-13B (Ours) | 62.4 | 68.2 | 74.5 | 62.5 | 90.7 | 83.4 | 80.8 | 74.7 |

Prompt Template

The prompts we used for generating code solutions and text answers are introduced below.

Code Solution

The prompt template for the insert, delete, update, query, and plot operations on a single table.

[INST]Below are the first few lines of a CSV file. You need to write a Python program to solve the provided question.

Header and first few lines of CSV file:
{csv_data}

Question: {question}[/INST]

The prompt template for the merge operation on two tables.

[INST]Below are the first few lines two CSV file. You need to write a Python program to solve the provided question.

Header and first few lines of CSV file 1:
{csv_data1}

Header and first few lines of CSV file 2:
{csv_data2}

Question: {question}[/INST]

The csv_data field is filled with the first few lines of your provided table file. Below is an example:

Sex,Length,Diameter,Height,Whole weight,Shucked weight,Viscera weight,Shell weight,Rings
M,0.455,0.365,0.095,0.514,0.2245,0.101,0.15,15
M,0.35,0.265,0.09,0.2255,0.0995,0.0485,0.07,7
F,0.53,0.42,0.135,0.677,0.2565,0.1415,0.21,9
M,0.44,0.365,0.125,0.516,0.2155,0.114,0.155,10
I,0.33,0.255,0.08,0.205,0.0895,0.0395,0.055,7
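
To make the template concrete, here is a small sketch of how the {csv_data} and {question} placeholders can be filled from a local file. The file name, the 5-row preview length, and the helper name build_prompt are illustrative assumptions, not part of the original card.

```python
# Sketch: fill the single-table code-solution template from a CSV file.
from itertools import islice

TEMPLATE = (
    "[INST]Below are the first few lines of a CSV file. "
    "You need to write a Python program to solve the provided question.\n\n"
    "Header and first few lines of CSV file:\n{csv_data}\n\n"
    "Question: {question}[/INST]"
)

def build_prompt(csv_path: str, question: str, n_rows: int = 5) -> str:
    # Keep the header line plus the first n_rows data rows, as in the example above.
    with open(csv_path, encoding="utf-8") as f:
        preview = "".join(islice(f, n_rows + 1)).rstrip("\n")
    return TEMPLATE.format(csv_data=preview, question=question)

# "abalone.csv" is a hypothetical file containing the example rows shown above.
print(build_prompt("abalone.csv", "What is the average Length of male (M) abalone?"))
```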

Text Answer

The prompt template for direct text answer generation on short tables.

[INST]Offer a thorough and accurate solution that directly addresses the Question outlined in the [Question].
### [Table Text]
{table_descriptions}

### [Table]
```
{table_in_csv}
```

### [Question]
{question}

### [Solution][/INST]
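
Filled in the same way, a document-embedded table can be answered directly. Below is a minimal sketch; it reuses the `llm` object from the llama-cpp-python example earlier in this card, and the table description, CSV rows, and question are made up for illustration.

```python
# Sketch: fill the text-answer template and query the model directly.
# Assumes `llm` is the Llama object created in the earlier example.
TEXT_TEMPLATE = (
    "[INST]Offer a thorough and accurate solution that directly addresses "
    "the Question outlined in the [Question].\n"
    "### [Table Text]\n{table_descriptions}\n\n"
    "### [Table]\n```\n{table_in_csv}\n```\n\n"
    "### [Question]\n{question}\n\n"
    "### [Solution][/INST]"
)

prompt = TEXT_TEMPLATE.format(
    table_descriptions="Quarterly revenue (in $M) of a fictional company.",
    table_in_csv="Quarter,Revenue\nQ1,120\nQ2,150",
    question="Which quarter had the higher revenue?",
)
print(llm(prompt, max_tokens=256, temperature=0.0)["choices"][0]["text"])
```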

For more details about how to use TableLLM, please refer to our GitHub page: https://github.com/TableLLM/TableLLM

Model size: 6.74B params
Architecture: llama
