chansung committed · verified
Commit 30775d6 · 1 Parent(s): 424f7e8

Upload folder using huggingface_hub

Files changed (3)
  1. README.md +24 -0
  2. app.py +3 -1
  3. configs/prompts.toml +2 -2
README.md CHANGED
@@ -83,3 +83,27 @@ $ python main.py # or gradio main.py
 
  # Acknowledgments
  This is a project built during the Vertex sprints held by Google's ML Developer Programs team. We are thankful for the generous amount of GCP credits granted to do this project.
+ # AdaptSum
+
+ AdaptSum stands for Adaptive Summarization. This project focuses on developing an LLM-powered system for dynamic summarization. Instead of generating an entirely new summary with each update, the system intelligently identifies and modifies only the necessary parts of the existing summary. This approach aims to create a more efficient and fluid summarization process within a continuous chat interaction with an LLM.
+
+ # Instructions
+
+ 1. Install dependencies
+ ```shell
+ $ pip install -r requirements.txt
+ ```
+
+ 2. Set up the Gemini API key
+ ```shell
+ $ export GEMINI_API_KEY=xxxxx
+ ```
+ > Note that the Gemini API key should be obtained from Google AI Studio. Vertex AI is not supported at the moment (the Gemini SDK does not currently provide file-uploading functionality for Vertex AI).
+
+ 3. Run the Gradio app
+ ```shell
+ $ python main.py # or gradio main.py
+ ```
+
+ # Acknowledgments
+ This is a project built during the Vertex sprints held by Google's ML Developer Programs team. We are thankful for the generous amount of GCP credits granted to do this project.
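Step 2 above only exports the key into the environment; a minimal sketch of how an app can validate it at startup (the helper name `get_gemini_api_key` is illustrative, not from the repo — only the `GEMINI_API_KEY` variable name comes from the README):

```python
import os

def get_gemini_api_key() -> str:
    # Reads the key exported in step 2 and fails fast with a clear
    # message if it is missing.
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set; export it as shown in the instructions."
        )
    return key

if __name__ == "__main__":
    os.environ["GEMINI_API_KEY"] = "xxxxx"  # stand-in value for demonstration
    print(get_gemini_api_key())
```

Failing fast here is friendlier than letting the Gemini client raise a less obvious authentication error later.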
app.py CHANGED
@@ -59,7 +59,9 @@ async def echo(message, history, state, persona):
     model_content_stream = await client.models.generate_content_stream(
         model=args.model,
         contents=state['messages'],
-        config=types.GenerateContentConfig(seed=args.seed),
+        config=types.GenerateContentConfig(
+            system_instruction=system_instruction, seed=args.seed
+        ),
     )
     async for chunk in model_content_stream:
         response_chunks += chunk.text
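The accumulation loop in this hunk can be sketched in isolation with a stand-in async generator, so no Gemini client or API key is needed (`Chunk` and `fake_stream` are illustrative names; only the `chunk.text` shape and the `response_chunks += chunk.text` pattern come from the diff):

```python
import asyncio

class Chunk:
    # Imitates the shape of a Gemini streaming response chunk:
    # each chunk carries a .text fragment.
    def __init__(self, text: str):
        self.text = text

async def fake_stream():
    # Stand-in for client.models.generate_content_stream(...).
    for piece in ["Hello", ", ", "world"]:
        yield Chunk(piece)

async def collect() -> str:
    # Same accumulation pattern as the diff:
    # async for chunk in model_content_stream: response_chunks += chunk.text
    response_chunks = ""
    async for chunk in fake_stream():
        response_chunks += chunk.text
    return response_chunks

if __name__ == "__main__":
    print(asyncio.run(collect()))  # → Hello, world
```

Accumulating into a string this way lets the Gradio handler yield a progressively longer response as chunks arrive.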
configs/prompts.toml CHANGED
@@ -1,9 +1,9 @@
 [summarization]
 prompt = """
-**Summary:**
+Summary:
 $previous_summary
 
-**Last Conversation:**
+Last Conversation:
 $latest_conversation
 """
 
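The prompt in `configs/prompts.toml` uses `$`-style placeholders, which match Python's `string.Template` syntax; a minimal sketch of how such a prompt can be rendered (this diff does not show the app's actual substitution code, so the mechanism here is an assumption, and the sample values are invented):

```python
from string import Template

# Prompt text mirrors the post-change [summarization] prompt above.
prompt = Template("""
Summary:
$previous_summary

Last Conversation:
$latest_conversation
""")

# Hypothetical inputs, purely for demonstration.
rendered = prompt.substitute(
    previous_summary="The user asked about Gradio setup.",
    latest_conversation="User: how do I run it?\nAssistant: python main.py",
)

if __name__ == "__main__":
    print(rendered)
```

`Template.substitute` raises `KeyError` if a placeholder is missing, which makes malformed prompt configs fail loudly rather than producing a summary with a literal `$previous_summary` in it.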