Advanced overview of the "dynamic prompts" function

#442
by philosopher-from-god - opened

Hello HuggingChat,

I searched the web for information on the "dynamic prompts" function and found only the brief description given in the feature's footnote: "Allow the use of template variables {{url=https://example.com/path}} to insert dynamic content into your prompt by making GET requests to specified URLs on each inference."

This is a very superficial description of the functionality, and AI assistants sometimes behave illogically when you try to use certain HuggingChat features.

Could you provide a more detailed explanation of this function, covering:

  • A detailed description of how the "dynamic prompts" mechanism works.
  • Examples of using "dynamic prompts" in various scenarios, to better understand its practical application.
  • Recommendations for using this function effectively, including its possible limitations.

Providing this extended information would help users make better-informed decisions when working with "dynamic prompts" and avoid misunderstandings or unexpected behavior from the AI assistant.

I would be grateful for a more detailed description of this functionality, so that its capabilities can be understood and used as effectively as possible.

I agree, I asked myself the same question.
After reading the code at https://github.com/huggingface/chat-ui/blob/18fba9f7bbcd73c9a9c39b1cbfbabf5fa50767ed/src/routes/conversation/%5Bid%5D/%2Bserver.ts#L385
I managed to play with it.
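
For anyone who doesn't want to read the source, here is a minimal sketch of the idea behind that code, not the exact chat-ui implementation (the function and variable names are mine): every {{url=...}} placeholder in the assistant's preprompt is fetched with a GET request and the response body is spliced into the system prompt before each inference.

```ts
// Sketch only: replace each {{url=...}} placeholder with the GET response body.
async function injectDynamicContent(preprompt: string): Promise<string> {
  const pattern = /{{\s*url=(.*?)\s*}}/g;
  let result = preprompt;
  for (const match of preprompt.matchAll(pattern)) {
    const [placeholder, url] = match;
    try {
      const response = await fetch(url);   // plain GET on every inference
      const body = await response.text();  // inserted verbatim, so JSON arrives as JSON text
      result = result.replace(placeholder, body);
    } catch {
      // on failure the placeholder could be left as-is or replaced with ""
    }
  }
  return result;
}
```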

Let's say the system prompt is:

You are a helpful assistant
You received this instruction from dynamic request as a json:
{{url=https://example.com/api/instruction}}

Let's say https://example.com/api/instruction returns this JSON:

{"instruction":"say hello"}

Now, if the user says "proceed" in the chat, the system prompt is dynamically updated to:

You are a helpful assistant
You received this instruction from dynamic request as a json:
{"instruction":"say hello"}

And inference starts with this updated prompt.


Hugging Chat org

Yes, I think we'll document this feature a bit better when we have time.

It would be cool if the URLs of these prompt fragments (I call them "instructions" or "skills") could include a reference to the current conversation and/or the current user id (e.g. by appending a query param). That would open the door to context-aware, personalized content injected into an assistant's system prompt, and to all sorts of custom RAG workflows that would make these assistants totally awesome.

Additionally, this would allow third parties to develop custom memory architectures scoped to the user rather than to an individual conversation... like what ChatGPT has been doing for a while.
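
Purely as an illustration of that proposal (chat-ui does not forward any such query params today, and the param names here are made up), a user-scoped memory endpoint could look roughly like this:

```ts
// Hypothetical: assumes chat-ui appended ?user=... to the dynamic prompt URL,
// which is only a suggestion above, not an existing feature.
import { createServer } from "node:http";

const memories = new Map<string, string>(); // userId -> remembered facts
memories.set("42", "The user prefers concise answers and writes in TypeScript.");

createServer((req, res) => {
  const { searchParams } = new URL(req.url ?? "/", "http://localhost");
  const user = searchParams.get("user") ?? "anonymous";
  const notes = memories.get(user) ?? "No stored memory for this user yet.";
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(notes); // whatever is returned gets spliced into the system prompt
}).listen(3001);
```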

I know that most of this can be done via Gradio-based tools, but not all models support those tools, not all use cases require that degree of complexity, and tool calling has a way of quickly eating up context tokens... Think about all the great image-generating Assistants on the platform that use GET APIs like Pollinations AI, with no tools needed...
