from context import query
from openai import AsyncOpenAI
import panel as pn
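# NOTE: `query` is assumed to come from the fleet-context package
# (`pip install fleet-context`), which retrieves embedded Python library docs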

# system prompt taken from Fleet Context
SYSTEM_PROMPT = """
You are an expert in Python libraries. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so.
Each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question.
Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either.
Your users are also in a CLI environment. You are capable of writing and running code. DO NOT write hypothetical code. ALWAYS write real code that will execute and run end-to-end.

Instructions:
- Be objective, direct. Include literal information from the context, don't add any conclusion or subjective information.
- When writing code, ALWAYS have some sort of output (like a print statement). If you're writing a function, call it at the end. Do not generate the output, because the user can run it themselves.
- ALWAYS cite your sources. Context will be given to you after the text ### Context source_url ### with source_url being the url to the file. For example, ### Context https://example.com/docs/api.html#files ### will have a source_url of https://example.com/docs/api.html#files.
- When you cite your source, please cite it as [num] with `num` starting at 1 and incrementing with each source cited (1, 2, 3, ...). At the bottom, have a newline-separated `num: source_url` at the end of the response. ALWAYS add a new line between sources or else the user won't be able to read it. DO NOT convert links into markdown, EVER! If you do that, the user will not be able to click on the links.
For example:
**Context 1**: https://example.com/docs/api.html#pdfs
I'm a big fan of PDFs.
**Context 2**: https://example.com/docs/api.html#csvs
I'm a big fan of CSVs.
### Prompt ###
What is this person a big fan of?
### Response ###
This person is a big fan of PDFs[1] and CSVs[2].
1: https://example.com/docs/api.html#pdfs
2: https://example.com/docs/api.html#csvs
"""

pn.extension()

MODEL = "gpt-3.5-turbo"


async def answer(contents, user, instance):
    # start with system prompt
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]

    # add context to the user input
    context = ""
    fleet_responses = query(contents, k=3)
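    # each result is assumed to be a dict whose "metadata" carries the
    # source "url" and documentation "text" (shape inferred from use below)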
    for i, response in enumerate(fleet_responses, start=1):  # 1-indexed to match the prompt's citation format
        context += f"\n\n**Context {i}**: {response['metadata']['url']}\n{response['metadata']['text']}"
    instance.send(context, avatar="🛩️", user="Fleet Context", respond=False)

    # get history of messages (skipping the intro message)
    # and serialize Fleet Context messages with the "user" role
    messages.extend(
        instance.serialize(role_names={"user": ["user", "Fleet Context"]})[1:]
    )

    openai_response = await client.chat.completions.create(
        model=MODEL, messages=messages, temperature=0.2, stream=True
    )

    message = ""
    async for chunk in openai_response:
        token = chunk.choices[0].delta.content
        if token:
            message += token
            yield message


# AsyncOpenAI reads the OPENAI_API_KEY environment variable by default
client = AsyncOpenAI()
intro_message = pn.chat.ChatMessage("Ask me anything about Python libraries!", user="System")
chat_interface = pn.chat.ChatInterface(intro_message, callback=answer, callback_user="OpenAI")
template = pn.template.FastListTemplate(main=[chat_interface], title="Panel UI of Fleet Context 🛩️")
template.servable()
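
# To try this locally (assuming the file is saved as app.py):
#   panel serve app.py
# then open the URL Panel prints (port 5006 by default)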