Usage

#1 opened by Maverick17

How do I use this model correctly?

My current code looks like the following:

from transformers import AutoTokenizer, AutoModelForTokenClassification, AutoConfig

config = AutoConfig.from_pretrained("tli8hf/robertabase-structured-tuning-srl-conll2012")
tokenizer = AutoTokenizer.from_pretrained("tli8hf/robertabase-structured-tuning-srl-conll2012")
model = AutoModelForTokenClassification.from_pretrained("tli8hf/robertabase-structured-tuning-srl-conll2012", device_map="auto")

query = "The keys, which were needed to access the building, were locked in the car."
inputs = tokenizer(query, return_tensors="pt").to(model.device)  # keep inputs on the same device as the model
outputs = model(**inputs)
predicted_labels = outputs.logits.argmax(dim=-1).squeeze().tolist()  # predicted label id per subword token

texts = [tokenizer.decode(token) for token in inputs.input_ids[0]]  # decode each subword token
tags = [config.label_map_inv[str(label_id)] for label_id in predicted_labels]  # map ids back to tag strings via the config

print(texts)
print(tags)
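For completeness, here is how I would collapse the subword tags to one tag per word (a sketch, assuming a fast tokenizer so inputs.word_ids() is available; word boundaries follow the tokenizer's pre-tokenizer):

word_ids = inputs.word_ids(0)  # subword position -> word index (None for special tokens)
first_tag_per_word = []
seen = set()
for pos, wid in enumerate(word_ids):
    if wid is not None and wid not in seen:
        seen.add(wid)
        first_tag_per_word.append(tags[pos])  # take the tag of the word's first subword
print(first_tag_per_word)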

Hey, sorry, the model is not a standalone thing. Did you try the "demo" section in this repo?

Let me know if you run into other errors.

Tao

Hi Tao,

thanks for the info! I tried it out and it works on Linux, but not on Windows due to the bitsandbytes package...
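For anyone else on Windows, loading the model in full precision avoided the import for me (a sketch, assuming the failure comes from an optional quantized-loading path and nothing else in the demo needs bitsandbytes):

import torch
from transformers import AutoModelForTokenClassification

# Full-precision load: no 8-/4-bit quantization, so bitsandbytes is never imported.
model = AutoModelForTokenClassification.from_pretrained(
    "tli8hf/robertabase-structured-tuning-srl-conll2012",
    torch_dtype=torch.float32,
)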

Why do you guys implement this SRL stuff using BERT-based models? Of course, these models are quite capable of capturing language context, but what about the latest LLMs like LLaMA? Why does nobody fine-tune them to predict semantic roles?
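Just to sketch what I mean (the prompt format, role serialization, and model name here are my own assumptions, not anything from your paper): SRL could be cast as text-to-text and a causal LM fine-tuned on it with the usual masked-prompt loss.

from transformers import AutoTokenizer

# Any open causal LM would do; this checkpoint is just an example (gated, needs access).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

sentence = "The keys were locked in the car."
prompt = f"Annotate semantic roles.\nSentence: {sentence}\nPredicate: locked\nRoles:"
target = " [ARG1: The keys] [V: locked] [ARGM-LOC: in the car]"

# Standard causal-LM fine-tuning example: supervise only the target tokens.
prompt_ids = tokenizer(prompt, add_special_tokens=False).input_ids
target_ids = tokenizer(target, add_special_tokens=False).input_ids
input_ids = prompt_ids + target_ids
labels = [-100] * len(prompt_ids) + target_ids  # mask the prompt out of the loss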

NLP at University of Utah org

Great to hear that.

Back then (2019-2020, when our paper was written), RoBERTa was one of the best models for classification tasks. I agree that recent generative models could be a stronger option, and I bet many folks have already tried it. There is definitely something interesting there.

Maverick17 changed discussion status to closed
