Vercel AI SDK

#24
by julianouxui - opened

I'm trying to use the Vercel AI SDK, but the application doesn't work and doesn't return an error.
Is anyone else having this problem?
Besides configuring the token in the .env, do you need to configure anything else here?

https://sdk.vercel.ai/docs/guides/huggingface
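For reference, my .env only contains the token that the example reads (placeholder value here, adjust the variable name if your code reads a different one):

HUGGINGFACE_API_KEY=hf_xxxxxxxxxxxxxxxxxxxx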

Not sure, but if even the most basic way of loading the model with Gradio already gives an error, then I assume something is not right with the model itself.

import gradio as gr

# Loads the model's hosted Inference API demo (newer Gradio releases deprecate gr.Interface.load in favor of gr.load)
gr.Interface.load("models/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5").launch()
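Since the app fails silently, it may also help to call the Inference API directly and log the error. Here is a minimal sketch using the same @huggingface/inference client as the route below (the <|prompter|>/<|assistant|> tokens are OpenAssistant's prompt format; run it as an ES module, e.g. with tsx):

import { HfInference } from '@huggingface/inference'

const hf = new HfInference(process.env.HUGGINGFACE_API_KEY)

try {
  const out = await hf.textGeneration({
    model: 'OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5',
    inputs: '<|prompter|>Hello!<|endoftext|><|assistant|>',
    parameters: { max_new_tokens: 50 }
  })
  console.log(out.generated_text)
} catch (err) {
  // A bad token or an unavailable model shows up here instead of failing silently
  console.error(err)
}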

I used the example from GitHub (ai/examples/next-huggingface/app/api/chat/route.ts) and it worked:
import { HfInference } from '@huggingface/inference'
import { HuggingFaceStream, StreamingTextResponse } from 'ai'
import { experimental_buildOpenAssistantPrompt } from 'ai/prompts'

// Create a new HuggingFace Inference instance
const Hf = new HfInference(process.env.HUGGINGFACE_API_KEY)

// IMPORTANT! Set the runtime to edge
export const runtime = 'edge'

export async function POST(req: Request) {
  // Extract the messages from the body of the request
  const { messages } = await req.json()

  const response = Hf.textGenerationStream({
    model: 'OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5',
    inputs: experimental_buildOpenAssistantPrompt(messages),
    parameters: {
      max_new_tokens: 200,
      // @ts-ignore (this is a valid parameter specifically in OpenAssistant models)
      typical_p: 0.2,
      repetition_penalty: 1,
      truncate: 1000,
      return_full_text: false
    }
  })

  // Convert the response into a friendly text-stream
  const stream = HuggingFaceStream(response)

  // Respond with the stream
  return new StreamingTextResponse(stream)
}
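For completeness, that route is driven from the client by useChat from ai/react, which posts to /api/chat by default. This is roughly what the example's app/page.tsx looks like (trimmed sketch):

'use client'

import { useChat } from 'ai/react'

export default function Chat() {
  // useChat keeps the message list in state and streams the reply from /api/chat
  const { messages, input, handleInputChange, handleSubmit } = useChat()

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>
          {m.role}: {m.content}
        </div>
      ))}
      <form onSubmit={handleSubmit}>
        <input value={input} onChange={handleInputChange} placeholder="Say something..." />
      </form>
    </div>
  )
}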
