Model Card for Chahnwoo/TinyLlama-1.1B-Chat-v1.0-0.1E-QLoRA-Databricks-SFT-Test_20240729

Model Details

Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

Training Details

Training Data

Databricks Dolly instruction-tuning dataset (databricks/databricks-dolly-15k), of which 10% was utilized.
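Loading such a subset might look like the sketch below; the exact sampling method (first 10% vs. a random split) is not stated in this card.

```python
from datasets import load_dataset

# Load the first 10% of the Dolly training split (illustrative; the
# card does not specify how the 10% subset was selected).
dataset = load_dataset("databricks/databricks-dolly-15k", split="train[:10%]")
```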

Training Procedure

  1. Tokenize and label the data
  2. Load the base LLM
  3. Apply Quantized Low-Rank Adaptation (QLoRA) to the modules ["q_proj", "k_proj", "v_proj", "o_proj"]
  4. Perform training with the Hugging Face Trainer
  5. Use DataCollatorForSeq2Seq (see the sketch after this list)
    • This data collator was chosen over DataCollatorForLanguageModeling because the latter overwrites any pre-defined "labels"
    • The overwriting is done by the torch_mask_tokens and tf_mask_tokens functions of DataCollatorForLanguageModeling
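A minimal sketch of steps 2–5 follows. The base model is inferred from the repository name (TinyLlama/TinyLlama-1.1B-Chat-v1.0), and the LoRA rank, alpha, batch size, and other hyperparameters are illustrative placeholders, not values stated in this card.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    DataCollatorForSeq2Seq,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # inferred from the repo name

# Step 2: load the base LLM in 4-bit precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# Step 3: attach LoRA adapters to the attention projection modules.
lora_config = LoraConfig(
    r=8,            # assumed rank; not stated in the card
    lora_alpha=16,  # assumed scaling factor; not stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Step 5: DataCollatorForSeq2Seq pads pre-defined "labels" alongside
# "input_ids" rather than overwriting them.
collator = DataCollatorForSeq2Seq(tokenizer, model=model, label_pad_token_id=-100)

# Step 4: train with the Hugging Face Trainer.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="qlora-out", per_device_train_batch_size=1, fp16=True),
    train_dataset=tokenized_dataset,  # the labelled dataset from step 1
    data_collator=collator,
)
trainer.train()
```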

Preprocessing

A different instruction prompt template was used for each category in the dataset. The templates are listed below, followed by a formatting sketch.

open_qa
### Instruction:
Answer the question below. Be as specific and concise as possible.

### Question:
{instruction}

### Response:
{response}

general_qa
### Instruction:
Answer the question below to the best of your knowledge.

### Question:
{instruction}

### Response:
{response}

classification
### Instruction:
You will be given a question and a list of potential answers to that question. You are to select the correct answers out of the available choices.

### Question:
{instruction}

### Response:
{response}

closed_qa
### Instruction:
You will be given a question to answer and context that contains pertinent information. Provide a concise and accurate response to the question using the information provided in the context.

### Question:
{instruction}

### Context:
{context}

### Response:
{response}

brainstorming
### Instruction:
You will be given a question that does not have a correct answer. You are to brainstorm one possible answer to the provided question.

### Question:
{instruction}

### Response:
{response}

information_extraction
### Instruction:
You will be given a question or query and some context that can be used to answer it. You are to extract relevant information from the provided context to provide an accurate response to the given query.

### Question:
{instruction}

### Context:
{context}

### Response:
{response}

summarization
### Instruction:
You will be given a question or request and context that can be used for your response. You are to summarize the provided context to provide an answer to the question.

### Question:
{instruction}

### Context:
{context}

### Response:
{response}

creative_writing
### Instruction:
You will be given a prompt that you are to write about. Be creative.

### Prompt:
{instruction}

### Response:
{response}"""

Labelled Data Format

{
  'input_ids': List[int],
  'attention_mask': List[int],
  'labels': List[int]
}

Labels were created by masking every token except the response tokens with the ignore index (-100), so that the loss is computed only over the response (see the sketch below).
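A minimal sketch of this labelling step, assuming the prompt is everything up to and including the "### Response:" header; the helper name is illustrative.

```python
def tokenize_and_label(prompt: str, response: str, tokenizer, max_length: int = 512) -> dict:
    """Tokenize a (prompt, response) pair, masking prompt tokens in the labels."""
    # Tokenize the two parts separately so the response boundary is known.
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    response_ids = tokenizer(response + tokenizer.eos_token, add_special_tokens=False)["input_ids"]

    input_ids = (prompt_ids + response_ids)[:max_length]
    # -100 is the ignore index of PyTorch's cross-entropy loss: prompt
    # tokens contribute nothing, so the loss covers only the response.
    labels = ([-100] * len(prompt_ids) + response_ids)[:max_length]

    return {
        "input_ids": input_ids,
        "attention_mask": [1] * len(input_ids),
        "labels": labels,
    }
```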

Hardware

Fine-tuning was performed in a single Google Colab session on one NVIDIA T4 GPU. The dataset was not fully utilized due to the runtime limits of the free tier.

