import google.generativeai as genai
import mesop as me
from typing_extensions import TypedDict

from data_model import ChatMessage, State

class Pad(TypedDict):
  title: str
  content: str


class Output(TypedDict):
  intro: str
  pad: Pad
  conclusion: str


# Create the model
# See https://ai.google.dev/api/python/google/generativeai/GenerativeModel
generation_config = {
  "temperature": 1,
  "top_p": 0.95,
  "top_k": 64,
  "max_output_tokens": 8192,
  "response_mime_type": "text/plain",
  # "response_schema": Output,
}


def configure_gemini():
  """Configure the Gemini client with the API key stored in app state."""
  state = me.state(State)
  genai.configure(api_key=state.gemini_api_key)


def _send_prompt(model_name: str, prompt: str, history: list[ChatMessage]):
  """Stream a response from the given model, seeding the chat with prior history."""
  configure_gemini()

  model = genai.GenerativeModel(
    model_name=model_name,
    generation_config=generation_config,
  )

  chat_session = model.start_chat(
    history=[
      {"role": message.role, "parts": [message.content]} for message in history
    ]
  )

  # Stream the response so the UI can render partial output as it arrives.
  for chunk in chat_session.send_message(prompt, stream=True):
    yield chunk.text


def send_prompt_pro(prompt: str, history: list[ChatMessage]):
  yield from _send_prompt("gemini-1.5-pro-latest", prompt, history)


def send_prompt_flash(prompt: str, history: list[ChatMessage]):
  yield from _send_prompt("gemini-1.5-flash-latest", prompt, history)