Getting an error while importing KRISSBERT
I am running this code:
from transformers import AutoTokenizer, KRISSBERT
tokenizer = AutoTokenizer.from_pretrained("microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL")
model = KRISSBERT.from_pretrained("microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL")
But I am getting this error:
ImportError: cannot import name 'KRISSBERT' from 'transformers' (/usr/local/lib/python3.8/dist-packages/transformers/__init__.py)
Hi @pradeepmohans, I think the model repo comes with an example folder:
https://huggingface.co/microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL/tree/main/usage
The examples folder doesn't show how to import KRISSBERT and use the embeddings, as in:
model = KRISSBERT.from_pretrained("microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL")
Some more information is needed.
I am particularly looking to call model.encode() to encode my sentence and get the embeddings.
Have you checked out this code snippet? https://huggingface.co/microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL/blob/main/usage/run_entity_linking.py#L368
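For context, the original ImportError happens because transformers has no KRISSBERT class; the checkpoint is a plain BERT encoder and is meant to be loaded through the Auto classes. A quick sanity check (just a sketch, not code from the script itself):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL")
print(config.architectures)  # I'd expect a BERT class here, e.g. ['BertModel']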
@shengz Let me try this. Can you confirm whether the model name and path are correct:
tokenizer = AutoTokenizer.from_pretrained(
"microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL",
use_fast=True,
)
encoder = AutoModel.from_pretrained(
"microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL"
)
and I will be importing this:
from transformers import (
set_seed,
AutoConfig,
AutoTokenizer,
AutoModel,
PreTrainedTokenizer,
)
Correct?
I was able to launch the model. However, when I try to run:
v2_embeddings = encoder.generate(v2_input_ids,max_length=127)
I get the following error:
AttributeError                            Traceback (most recent call last)
<ipython-input> in <module>
----> 1 v2_embeddings = encoder.generate(v2_input_ids,max_length=127)

/usr/local/lib/python3.8/dist-packages/transformers/generation_utils.py in greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
   1539                 continue  # don't waste resources running the code we don't need
   1540
-> 1541             next_token_logits = outputs.logits[:, -1, :]
   1542
   1543             # Store scores, attentions and hidden_states when required
AttributeError: 'BaseModelOutputWithPoolingAndCrossAttentions' object has no attribute 'logits'
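That AttributeError is expected with this checkpoint: AutoModel loads the bare encoder, whose forward pass returns hidden states but no logits, while generate() assumes a language-modeling head. KRISSBERT is an encoder for producing embeddings, so skip generate() and run a plain forward pass instead. A minimal sketch, assuming a [CLS]-vector sentence embedding is what you want (the linked run_entity_linking.py does its own mention-specific preprocessing, so follow it for entity linking proper):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL")
encoder = AutoModel.from_pretrained("microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL")
encoder.eval()

inputs = tokenizer("your sentence here", return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    outputs = encoder(**inputs)  # BaseModelOutputWithPoolingAndCrossAttentions

# last_hidden_state has shape (batch, seq_len, hidden); take the [CLS] token as the embedding
v2_embeddings = outputs.last_hidden_state[:, 0, :]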