
tinyllama-colorist-lora-v0.3


This model, tinyllama-colorist-lora-v0.3, is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v0.3 on the color dataset.

Study Motivation

The goal is to study this new TinyLlama model as a replacement for Llama2 in resource-constrained environments. In the future I also plan to fine-tune this model for chat and for a specific domain in Portuguese and Spanish 🤗.

Prompt format

The training process is the same as for a regular Llama2 model, using a chat prompt format like this:

<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n{answer}<|im_end|>\n
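
The usage example below calls a formatted_prompt helper that is not defined in this card. A minimal sketch of such a helper, assuming it simply wraps the user question in the template above and leaves the assistant turn open for the model to complete:

def formatted_prompt(question: str) -> str:
    # Wrap the question in the chat template shown above; the assistant turn
    # is left open so the model completes it with the color code
    return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n"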

Instructions for use

User Input: Give me a sky blue color.
LLM response: #6092ff

Model usage

import torch
from transformers import AutoTokenizer, pipeline

# Assumed placeholder: set this to the Hub id or local path of this fine-tuned model
model_id_colorist_final = "tinyllama-colorist-lora-v0.3"

def print_color_space(hex_color):
    # Render a hex color as a filled block in the terminal using ANSI escape codes
    def hex_to_rgb(hex_color):
        hex_color = hex_color.lstrip('#')
        return tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
    r, g, b = hex_to_rgb(hex_color)
    print(f'{hex_color}: \033[48;2;{r};{g};{b}m           \033[0m')

tokenizer = AutoTokenizer.from_pretrained(model_id_colorist_final)
pipe = pipeline(
    "text-generation",
    model=model_id_colorist_final,
    torch_dtype=torch.float16,
    device_map="auto",
)

from time import perf_counter
start_time = perf_counter()

# formatted_prompt is the chat-template helper sketched under "Prompt format" above
prompt = formatted_prompt('give me a pure brown color')
sequences = pipe(
    prompt,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=12
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")

output_time = perf_counter() - start_time
print(f"Time taken for inference: {round(output_time,2)} seconds")

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • training_steps: 200
  • mixed_precision_training: Native AMP
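
The training script itself is not included in this card. A minimal sketch of how these hyperparameters might map onto transformers.TrainingArguments (the Adam betas and epsilon above are the library defaults; output_dir is a placeholder):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tinyllama-colorist-lora-v0.3",  # placeholder output directory
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,  # 8 x 4 = total train batch size of 32
    lr_scheduler_type="cosine",
    max_steps=200,
    fp16=True,                      # mixed precision (native AMP)
)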

Training results

Framework versions

  • PEFT 0.9.0
  • Transformers 4.38.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
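
Because this repository is a LoRA adapter trained with PEFT (listed above), it can also be loaded explicitly via peft rather than through the pipeline call shown earlier. A minimal sketch, assuming the same model id as in the usage example:

import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies this LoRA adapter on top of it
model = AutoPeftModelForCausalLM.from_pretrained(
    model_id_colorist_final,  # assumed Hub id / local path of this adapter
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id_colorist_final)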