---
license: openrail
datasets:
  - lucasmccabe-lmi/CodeAlpaca-20k
language:
  - en
library_name: adapter-transformers
---

# Model Card for opt350m-codealpaca20k

## Model Description

A simple opt-350m model fine-tuned on the CodeAlpaca dataset using 4-bit quantization and Parameter-Efficient Fine-Tuning (PEFT). It is designed to understand and generate code-related responses to the prompts provided.

## Model Architecture

- Base Model: `facebook/opt-350m`
- Fine-tuning: Parameter-Efficient Fine-Tuning (PEFT) with LoRA

## Training Data

The model was trained on the `lucasmccabe-lmi/CodeAlpaca-20k` dataset, which contains code-related prompts and their corresponding outputs.
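
The dataset can be pulled straight from the Hub with the `datasets` library. A minimal sketch for inspecting it; column names are printed rather than assumed:

```python
from datasets import load_dataset

# load the training split of the dataset named in this card
ds = load_dataset("lucasmccabe-lmi/CodeAlpaca-20k", split="train")

# print the schema and a sample row rather than assuming field names
print(ds.column_names)
print(ds[0])
```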

## Training Procedure

### Quantization Configuration

- Quantization Type: 4-bit
- Compute Dtype: `float16`
- Double Quant: Enabled
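
Assuming the quantization was done with `bitsandbytes` through `transformers` (the training script is not shown in this card), the values above map onto a config roughly like this sketch:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit weights, float16 compute, nested (double) quantization,
# mirroring the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
```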

### PEFT Configuration

- LoRA Alpha: 16
- LoRA Dropout: 0.5
- Bias: none
- Task Type: `CAUSAL_LM`
- Target Modules: `q_proj`, `v_proj`, `k_proj`
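
Expressed as a `peft` `LoraConfig`, the settings above look roughly like the following sketch. The LoRA rank `r` is not stated in this card, so `peft`'s default is left in place:

```python
from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,                                  # values from the list above
    lora_dropout=0.5,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj", "k_proj"],
    # r (the LoRA rank) is not given in this card; peft's default applies
)
```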

### Training Arguments

- Output Directory: `./results`
- Batch Size: 4 (per device)
- Gradient Accumulation Steps: 2
- Number of Epochs: 10
- Optimizer: `adamw_bnb_8bit`
- Learning Rate: 2e-5
- Max Gradient Norm: 0.3
- Warmup Ratio: 0.03
- Learning Rate Scheduler: cosine
- Logging Steps: 10
- Save Steps: 250
- FP16 Precision: Enabled
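
As a sketch, the same hyperparameters written out as `transformers.TrainingArguments`, assuming that is the trainer API the training script used:

```python
from transformers import TrainingArguments

# hyperparameters copied from the list above
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    num_train_epochs=10,
    optim="adamw_bnb_8bit",
    learning_rate=2e-5,
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    logging_steps=10,
    save_steps=250,
    fp16=True,
)
```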

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# tokenizer comes from the base model; weights come from this repository
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("harpomaxx/opt350m-codealpaca20k")

# prompts use the "### Question:" format shown in this card
prompt = "### Question: [Your code-related question here]"
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(inputs)
decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(decoded_output)
```
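
The snippet above assumes the repository hosts full (merged) model weights. Since this card lists `adapter-transformers`/PEFT, the repository may instead hold only a LoRA adapter; in that case loading typically goes through `peft` — a sketch:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# load the frozen base model, then attach the adapter weights on top
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base_model, "harpomaxx/opt350m-codealpaca20k")
```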

## License

OpenRAIL