Resource Requirement

#12
by MZurk - opened

Every time I try to run this model, the process just prints "Killed". When I dug into it, the system log shows:

"Out of memory: Killed process 1221 (python3) total-vm:42467440kB, anon-rss:7566620kB, file-rss:0kB, shmem-rss:4kB, UID:1000 pgtables:27940kB oom_score_adj:0"

Keep in mind: I'm testing it with the sample code

from angle_emb import AnglE

# load UAE-Large-V1 with CLS pooling and move it to the GPU
angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls').cuda()

# single sentence -> one embedding
vec = angle.encode('hello world', to_numpy=True)
print(vec)

# batch of sentences -> one embedding per sentence
vecs = angle.encode(['hello world1', 'hello world2'], to_numpy=True)
print(vecs)

I also made sure to install everything specified in its requirements.txt file.

I am running this in VS Code with a WSL terminal. Any solutions?

WhereIsAI org
edited Jan 23

I didn't test it on WSL, so I'm not sure whether it is a problem with transformers.

Could you test it with plain transformers, as follows?

import torch
from transformers import AutoModel, AutoTokenizer

model_id = 'WhereIsAI/UAE-Large-V1'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id).cuda()

inputs = 'hello world!'
tok = tokenizer([inputs], return_tensors='pt')
# move the input tensors to the GPU
tok = {k: v.cuda() for k, v in tok.items()}

# CLS pooling: the hidden state of the first token is the sentence embedding
with torch.no_grad():
    hidden_state = model(**tok).last_hidden_state
vec = hidden_state[:, 0]
print(vec)
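
If the plain transformers version is also killed while loading the weights, the WSL VM is most likely just running out of RAM rather than anything specific to angle_emb. Below is a minimal sketch (not tested on WSL) that checks how much memory the VM actually exposes and loads the weights in half precision to lower the peak during loading; it assumes psutil is installed and that fp16 embeddings are acceptable for your use case.

import psutil
import torch
from transformers import AutoModel, AutoTokenizer

# WSL2 only exposes part of the host's RAM by default (the cap can be
# raised in .wslconfig); check what the VM actually sees
mem = psutil.virtual_memory()
print(f'total: {mem.total / 1e9:.1f} GB, available: {mem.available / 1e9:.1f} GB')

model_id = 'WhereIsAI/UAE-Large-V1'
tokenizer = AutoTokenizer.from_pretrained(model_id)

# loading the weights in half precision roughly halves the host RAM needed
# while the checkpoint is read, before the model is moved to the GPU
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.float16).cuda()

tok = tokenizer(['hello world!'], return_tensors='pt')
tok = {k: v.cuda() for k, v in tok.items()}
with torch.no_grad():
    vec = model(**tok).last_hidden_state[:, 0]  # CLS pooling
print(vec)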
