How to deploy this on Vertex AI?

#14
by ravra - opened

Hi,

I tried to deploy the model on Vertex AI, but it is giving an error.

I am getting an error like: ERROR 2023-05-31T07:38:18.376283832Z [resource.labels.taskName: workerpool0-0] File "main.py", line 8, in
{
"insertId": "1hjwmhtfm1rso4",
"jsonPayload": {
"message": " File "main.py", line 8, in \n",
"attrs": {
"tag": "workerpool0-0"
},
"levelname": "ERROR"
},
"resource": {
"type": "ml_job",
"labels": {
"job_id": "3056473000426602496",
"task_name": "workerpool0-0",
"project_id": "api-appexecutable-com"
}
},
"timestamp": "2023-05-31T07:38:18.376283832Z",
"severity": "ERROR",
"labels": {
"ml.googleapis.com/tpu_worker_id": "",
"compute.googleapis.com/resource_name": "cmle-training-18349684525105594269",
"ml.googleapis.com/trial_type": "",
"ml.googleapis.com/job_id/log_area": "root",
"ml.googleapis.com/trial_id": "",
"compute.googleapis.com/resource_id": "1504603656255391738",
"compute.googleapis.com/zone": "us-west1-b"
},
"logName": "projects/api-appexecutable-com/logs/workerpool0-0",
"receiveTimestamp": "2023-05-31T07:38:52.295831776Z"
}

---- main.py is:

from flask import Flask, request, jsonify
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

app = Flask(__name__)

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/starchat-alpha")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/starchat-alpha",
                                             load_in_8bit=True,
                                             device_map='auto',
                                             torch_dtype=torch.float16)

@app.route('/generate', methods=['POST'])
def generate():
    data = request.json
    input_prompt = data['prompt']
    system_prompt = "<|system|>\nBelow is a conversation between a human user and a helpful AI coding assistant.<|end|>\n"
    user_prompt = f"<|user|>\n{input_prompt}<|end|>\n"
    assistant_prompt = "<|assistant|>"
    full_prompt = system_prompt + user_prompt + assistant_prompt
    inputs = tokenizer.encode(full_prompt, return_tensors="pt").to('cuda')
    outputs = model.generate(inputs,
                             eos_token_id=0,
                             pad_token_id=0,
                             max_length=256,
                             early_stopping=True)
    output = tokenizer.decode(outputs[0])
    output = output[len(full_prompt):]
    if "<|end|>" in output:
        cutoff = output.find("<|end|>")
        output = output[:cutoff]
    return jsonify({'response': output})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
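For context, this is how I am calling the endpoint to test it locally (assuming the container is up and listening on port 5000):

```shell
curl -X POST http://localhost:5000/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Write a Python function to reverse a string."}'
```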

---- requirements.txt contains:

flask==2.0.1
torch==1.10.0
transformers==4.8.2
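Looking at this again, I suspect these pins may be too old for starchat-alpha: the GPTBigCode architecture only landed in newer transformers releases, and load_in_8bit with device_map='auto' also needs accelerate and bitsandbytes, which I haven't listed at all. A sketch of what I plan to try instead (exact version numbers are my guess):

```text
flask==2.3.2
torch==2.0.1
transformers==4.29.2
accelerate==0.19.0
bitsandbytes==0.38.1
```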

---- Dockerfile contains:

FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
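Also, since main.py calls .to('cuda') and uses load_in_8bit, I think the plain python:3.8-slim-buster image won't have the CUDA runtime available. A GPU base-image sketch I am considering (the base image tag is an assumption, and the Vertex AI job spec would also need an accelerator attached):

```dockerfile
# Sketch: CUDA-enabled base so torch can see the GPU (tag is an assumption)
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04
RUN apt-get update && apt-get install -y python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python3", "main.py"]
```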

Can anyone guide me on where I am making a mistake?
