---
language:
- en
library_name: transformers
pipeline_tag: token-classification
---
This repository contains models and architectures which I am currently building for the Mistral model, such as a question-answering head, a token-classification head, a translation head, etc. These can be called through the associated transformers model controllers, i.e. the AutoModel classes!

These heads may redirect the output to other methods or intercept the input, as they can need special setup in the config, e.g. setting the required number of labels. For question answering, the context can be supplied separately from the question: the context goes in as one input and the request as a separate input. This allows for larger contexts and more in-depth prompts, since input windows can be increased at the cost of GPU memory; a rough sketch of that calling pattern follows below.
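As an illustration only, here is a minimal sketch of calling a question-answering head through `AutoModelForQuestionAnswering`, assuming the head is registered for that Auto class; the checkpoint name is reused from this repo purely as a placeholder, since the QA head is still being built.

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
import torch

# Placeholder checkpoint: assumes the QA head is exposed via trust_remote_code
model_name = "LeroyDyer/Mixtral_AI_TokenClassification"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name, trust_remote_code=True)

question = "What head is being added to the Mistral model?"
context = "Token classification is currently being added to the Mistral model."

# The context and the request are passed as separate inputs, so the
# context can grow without being mixed into the question itself.
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Recover the answer span from the start/end logits
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
print(answer)
```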
It is best to stay within the window of existing models, but the aim is to implement the different neural-network architectures, even combining them where possible. Currently adding token classification to the Mistral model.

Still some kinks to sort out; on the way; testing!
Example usage (still being tested; the dummy labels are only for illustration):

```python
# flash-attention is required by the remote Mistral code
!pip install flash_attn

from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

# Load the pre-trained model and tokenizer
# (the tokenizer comes from the related Mixtral_AI_PsycoTron checkpoint)
model_name = "LeroyDyer/Mixtral_AI_TokenClassification"
tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/Mixtral_AI_PsycoTron")
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    trust_remote_code=True,
    num_labels=3,  # set the number of required labels
)

model.train()  # note the parentheses: model.train alone does nothing

# Run a batch through the model to get the loss and logits
inputs = tokenizer("Hello world", return_tensors="pt")
labels = torch.zeros_like(inputs["input_ids"])  # dummy labels for illustration
outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits
```
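Continuing from the snippet above, a sketch of turning the logits into per-token predictions; the label names in `id2label` are hypothetical, since this head's actual label mapping is not yet fixed.

```python
model.eval()  # switch to inference mode

with torch.no_grad():
    logits = model(**inputs).logits

# Map each token to its highest-scoring class id
predictions = logits.argmax(dim=-1)

# Hypothetical names for the 3 classes; the real mapping depends
# on how the head is configured and trained.
id2label = {0: "O", 1: "B-ENT", 2: "I-ENT"}
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, predictions[0].tolist()):
    print(token, id2label[label_id])
```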