How to enable llava-v1.6-mistral model to output sensitive data (e.g. PII)

#6
by bernieboy - opened

When I use llava-v1.6-mistral for inference, the model responds as shown below. @liuhaotian, is there a parameter in LLaVA to switch off this behavior and allow sensitive data in the output? The following is the response from LLaVA. Thanks :-)

Name: [Name redacted]
Email: [Email redacted]

Please note that personal information such as the name, address, phone number, and email address have been redacted for privacy.

@bernieboy Hey, how were you able to run inference with this version of LLaVA? I tried setting up the HF Transformers patch for it but was unsuccessful.

P.S., For context, I am trying to run this on Colab Pro+

There are many ways; the guides explain the steps for getting inference running well enough. But getting this merged into the mainstream transformers library is a must :)

Yeah, that's true. Luckily I was able to make it work and can do batch inferences now
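For anyone landing here later, a minimal sketch of what inference through transformers can look like. This assumes the community `llava-hf/llava-v1.6-mistral-7b-hf` conversion and the `LlavaNextProcessor` / `LlavaNextForConditionalGeneration` classes; the image path and question are placeholders, and the `[INST] <image>\n... [/INST]` prompt shape follows the Mistral chat format this checkpoint expects:

```python
def build_prompt(question: str) -> str:
    # llava-v1.6-mistral uses the Mistral [INST] chat format,
    # with an <image> token marking where image features are spliced in.
    return f"[INST] <image>\n{question} [/INST]"


def run_inference(image_path: str, question: str) -> str:
    # Heavy imports kept inside the function so the prompt helper
    # above stays usable without torch/transformers installed.
    import torch
    from PIL import Image
    from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

    model_id = "llava-hf/llava-v1.6-mistral-7b-hf"  # assumed hub id
    processor = LlavaNextProcessor.from_pretrained(model_id)
    model = LlavaNextForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    image = Image.open(image_path)
    inputs = processor(
        images=image, text=build_prompt(question), return_tensors="pt"
    ).to(model.device)

    output_ids = model.generate(**inputs, max_new_tokens=128)
    return processor.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(run_inference("document.jpg", "What is written in this image?"))
```

For batch inference, the processor also accepts lists of images and prompts (with padding enabled), so multiple examples can go through `generate` in one call.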
