---
license: mit
datasets:
  - conll2003
language:
  - en
metrics:
  - f1
library_name: peft
pipeline_tag: token-classification
tags:
  - unsloth
  - llama-2
---

At the moment of writing, the 🤗 transformers library doesn't have a Llama implementation for token classification (although there is an open PR).

This model is based on an implementation by community member @KoichiYasuoka.
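For context, such an implementation typically pairs the `LlamaModel` backbone with a linear token-classification head. The sketch below only illustrates that general pattern; it is not @KoichiYasuoka's actual code.

```python
import torch.nn as nn
from transformers import LlamaModel, LlamaPreTrainedModel
from transformers.modeling_outputs import TokenClassifierOutput


class LlamaForTokenClassification(LlamaPreTrainedModel):
    """Generic sketch of a token-classification head on a Llama backbone."""

    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.model = LlamaModel(config)  # decoder-only backbone
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.post_init()

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        outputs = self.model(input_ids, attention_mask=attention_mask, **kwargs)
        logits = self.classifier(self.dropout(outputs[0]))  # per-token logits
        loss = None
        if labels is not None:
            # CrossEntropyLoss ignores label -100, the usual padding convention
            loss = nn.CrossEntropyLoss()(
                logits.view(-1, self.num_labels), labels.view(-1)
            )
        return TokenClassifierOutput(loss=loss, logits=logits)
```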

- Base Model: `unsloth/llama-2-7b-bnb-4bit`
- LoRA adapter with rank 8 and alpha 32; the remaining adapter settings can be found in `adapter_config.json` (a sketch of this configuration follows below)
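
For reference, the adapter configuration might be expressed with PEFT roughly as follows. This is a sketch, not the exact configuration: `target_modules` and `lora_dropout` are assumptions, so defer to `adapter_config.json` for the authoritative values.

```python
from peft import LoraConfig, TaskType

# Sketch of the adapter configuration described above; defer to
# adapter_config.json in this repo for the authoritative values.
lora_config = LoraConfig(
    r=8,                                  # LoRA rank (from the model card)
    lora_alpha=32,                        # LoRA scaling alpha (from the model card)
    target_modules=["q_proj", "v_proj"],  # assumption: a common choice for Llama-style models
    lora_dropout=0.05,                    # assumption: not stated in the card
    task_type=TaskType.TOKEN_CLS,
)
```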

This model was trained for only a single epoch; however, a notebook is made available for those who want to train on other datasets or for longer.
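
As a minimal usage sketch (not an official snippet from this repo): the adapter id below is a placeholder, and loading via `AutoModelForTokenClassification` assumes a transformers version with Llama token-classification support (or the community implementation above as a drop-in).

```python
import torch
from peft import PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

base_id = "unsloth/llama-2-7b-bnb-4bit"  # 4-bit base checkpoint
adapter_id = "<this-repo-id>"            # placeholder: the id of this adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=9 covers the CoNLL-2003 NER tag set (O plus B-/I- for PER, ORG, LOC, MISC)
base = AutoModelForTokenClassification.from_pretrained(base_id, num_labels=9)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)  # per-token label ids
```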