Generate Embeddings from OPT Models

#2
by SamuelEucker - opened

Hi,

I want to generate document embeddings from the OPT models, and I want to make sure that they are always of the same length (at least within a given corpus). How can I achieve this?
Thanks!

Assuming that we have some tokens, I did the following:
vectorized_docs = list()
for i in range(len(tokens)):
    vectorized_docs.append(self.model.generate(tokens[i]))
This way I get some vectorized representation of the tokens. However, the model warns that the max_length parameter needs to be carefully chosen. Once I set it high enough the model won't complain, but the vectors in vectorized_docs are still not always the same length (nor equal to max_length).

Any Comments are much appreciated!

Edit: I found out that model.generate returns the input text plus a generated continuation, not an embedding. So the question remains: how do I get the embedding for a text of my choosing? Thanks!

Hey @SamuelEucker ,

Good question!

Would the following example fit your needs?

#!/usr/bin/env python3
from transformers import OPTForCausalLM, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-125m")
model = OPTForCausalLM.from_pretrained("facebook/opt-125m")

# two single-token prompts, each of shape [1, 1]
start_tokens = torch.tensor(2 * [[[0]]])

for i in range(start_tokens.shape[0]):
    out_tokens = model.generate(start_tokens[i])
    # look up the input-embedding vectors for the generated token ids
    opt_embeddings = model.get_input_embeddings()
    # generated_embedding_vectors has shape [sequence_length, hidden_size]
    generated_embedding_vectors = opt_embeddings(out_tokens)[0]
