Model with only API

#24
by octopusta

Dear all,

We intend to use this model for conversational chat with our users.

Is there a way, with the simplest possible implementation, to run this model behind an API interface only?

Thank you in advance.

Any help, please?

We found ColossalAI with EnergonAI, which can provide an API interface.

Is there any way to run this model with it?

What about trying FastAPI? I've set up a chatbot API server using FastAPI.

Could you please help with the steps, or the main idea of how to do it?

Do you mean building the API inside the Python file, before doing the text generation with the model?

I thought about doing it this way, but it feels too bare-hands, so I asked whether there is any ready-to-use package or component.

Thank you for sharing your idea <3 I appreciate it and will give it a try.

Here is my code as a short version:

~~~
import torch
import transformers
from fastapi import FastAPI, Request
from pydantic import BaseModel

device = 'cuda' if torch.cuda.is_available() else 'cpu'

app = FastAPI()

# Request body model: clients must POST JSON with this shape
class Data(BaseModel):
    prompt: str

model_name = "PygmalionAI/pygmalion-6b"
gpt = transformers.AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
gpt.to(device)


@app.post('/completion/')
async def chat(data: Data, request: Request):
    # Tokenize the prompt and move the tensors to the model's device
    prompt = tokenizer(data.prompt, return_tensors='pt')
    prompt = {key: value.to(device) for key, value in prompt.items()}
    out = gpt.generate(**prompt, min_length=128, max_length=256, do_sample=True)
    # Skip the prompt tokens so only the newly generated text is returned
    completion = tokenizer.decode(out[0][len(prompt["input_ids"][0]):])
    return completion
~~~

Then you POST the request to your server like below:

~~~
import json
import requests

# Example payload: the keys must match the fields of the server's Data model
data = {"prompt": "Hello, how are you?"}
url = "http://your.ser.ver.ip:port/completion/"
res = requests.post(url, data=json.dumps(data))
print(res.text)
~~~

You must make sure the format of the POSTed data matches the data model the API expects (here, a JSON body with a "prompt" field matching the Data model).
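As a small aside, since the endpoint expects a JSON body, you can also let requests handle the serialization and headers for you (a sketch using the same placeholder URL as above):

~~~
import requests

url = "http://your.ser.ver.ip:port/completion/"
# requests' json= parameter serializes the dict and sets the Content-Type header
res = requests.post(url, json={"prompt": "Hello, how are you?"})
print(res.text)
~~~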

Thank you very much <3

Could you please provide the full code for me and explain how I would implement this, coming from a Golang background? I appreciate the help.
