.gitattributes CHANGED
@@ -33,4 +33,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
- *.pptx filter=lfs diff=lfs merge=lfs -text

.streamlit/config.toml DELETED
@@ -1,10 +0,0 @@
- [server]
- runOnSave = true
- headless = false
- maxUploadSize = 0
-
- [browser]
- gatherUsageStats = false
-
- [theme]
- base = "dark"

README.md CHANGED
@@ -4,7 +4,7 @@ emoji: 🏢
  colorFrom: yellow
  colorTo: green
  sdk: streamlit
- sdk_version: 1.32.2
+ sdk_version: 1.26.0
  app_file: app.py
  pinned: false
  license: mit
@@ -16,54 +16,36 @@ We spend a lot of time on creating the slides and organizing our thoughts for an
  With SlideDeck AI, co-create slide decks on any topic with Generative Artificial Intelligence.
  Describe your topic and let SlideDeck AI generate a PowerPoint slide deck for you—it's as simple as that!

- SlideDeck AI is powered by [Mistral 7B Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
+ SlideDeck AI is powered by [Mistral 7B Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
  Originally, it was built using the Llama 2 API provided by Clarifai.

- *Update (v4.0)*: Legacy SlideDeck AI allowed one-shot generation of a slide deck based on the inputs.
- In contrast, SlideDeck AI *Reloaded* enables an iterative workflow with a conversational interface,
- where you can create and improve the presentation.
-
-
  # Process

  SlideDeck AI works in the following way:

- 1. Given a topic description, it uses Mistral 7B Instruct to generate the *initial* content of the slides.
+ 1. Given a topic description, it uses Mistral 7B Instruct to generate the outline/contents of the slides.
  The output is generated as structured JSON data based on a pre-defined schema.
  2. Subsequently, it uses the `python-pptx` library to generate the slides,
  based on the JSON data from the previous step.
- A user can choose from a set of three pre-defined presentation templates.
- 3. At this stage onward, a user can provide additional instructions to *refine* the content.
- For example, one can ask to add another slide or modify an existing slide.
- A history of instructions is maintained.
- 4. Every time SlideDeck AI generates a PowerPoint presentation, a download button is provided.
- Clicking on the button will download the file.
-
-
- # Known Issues
-
- - **Connection timeout**: Requests sent to the Hugging Face Inference endpoint might time out.
- A maximum of five retries are attempted. If it still does not work, wait for a while and try again.
+ Here, a user can choose from a set of three pre-defined presentation templates.
+ 3. In addition, it uses Metaphor to fetch Web pages related to the topic.

- The following is not an issue but might appear as a strange behavior:
- - **Cannot paste text in the input box**: If the length of the copied text is greater than the maximum
- number of allowed characters in the textbox, pasting would not work.
+ 4. ~~Finally, it uses Stable Diffusion 2 to generate an image, based on the title and each slide heading.~~


  # Local Development

- SlideDeck AI uses [Mistral 7B Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
+ SlideDeck AI uses [Mistral 7B Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
  via the Hugging Face Inference API.
- To run this project by yourself, you need to provide the `HUGGINGFACEHUB_API_TOKEN` API key,
+ To run this project by yourself, you need to provide the `HUGGINGFACEHUB_API_TOKEN` and `METAPHOR_API_KEY` API keys,
  for example, in a `.env` file. Visit the respective websites to obtain the keys.


  # Live Demo

- - [SlideDeck AI](https://huggingface.co/spaces/barunsaha/slide-deck-ai) on Hugging Face Spaces
- - [Demo video](https://youtu.be/QvAKzNKtk9k) of the chat interface on YouTube
+ [SlideDeck AI](https://huggingface.co/spaces/barunsaha/slide-deck-ai)


  # Award

- SlideDeck AI has won the 3rd Place in the [Llama 2 Hackathon with Clarifai](https://lablab.ai/event/llama-2-hackathon-with-clarifai) in 2023.
+ SlideDeck AI has won the 3rd Place in the [Llama 2 Hackathon with Clarifai](https://lablab.ai/event/llama-2-hackathon-with-clarifai).
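
The README above describes a two-step pipeline: the LLM returns structured JSON, and `python-pptx` turns that JSON into slides. As background for that description, here is a minimal, illustrative sketch (not part of this commit); the `json_to_pptx` helper and the sample JSON are hypothetical stand-ins that merely mirror the schema used by the prompt templates and the `pptx_helper` module shown further below.

```python
import json5   # tolerant JSON parser; handles trailing commas in LLM output
import pptx    # python-pptx


SAMPLE_JSON = '''
{
    "title": "Understanding AI",
    "slides": [
        {"heading": "Introduction", "bullet_points": ["Brief overview of AI", "Why it matters"]}
    ]
}
'''


def json_to_pptx(json_str: str, output_path: str) -> None:
    """Render LLM-generated slide content (title + slides) into a .pptx file."""

    data = json5.loads(json_str)
    presentation = pptx.Presentation()

    # Title slide
    title_slide = presentation.slides.add_slide(presentation.slide_layouts[0])
    title_slide.shapes.title.text = data['title']

    # One bulleted slide per JSON entry
    for slide_json in data['slides']:
        slide = presentation.slides.add_slide(presentation.slide_layouts[1])
        slide.shapes.title.text = slide_json['heading']
        body = slide.placeholders[1].text_frame

        for idx, point in enumerate(slide_json['bullet_points']):
            if idx == 0:
                body.text = point
            else:
                body.add_paragraph().text = point

    presentation.save(output_path)


if __name__ == '__main__':
    json_to_pptx(SAMPLE_JSON, 'sample_deck.pptx')
```
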
app.py CHANGED
@@ -1,377 +1,310 @@
1
- import datetime
2
- import logging
3
  import pathlib
4
- import random
5
  import tempfile
6
- from typing import List
7
 
8
  import json5
 
9
  import streamlit as st
10
- from langchain_community.chat_message_histories import (
11
- StreamlitChatMessageHistory
12
- )
13
- from langchain_core.messages import HumanMessage
14
- from langchain_core.prompts import ChatPromptTemplate
15
- # from transformers import AutoTokenizer
16
 
 
 
17
  from global_config import GlobalConfig
18
- from helpers import llm_helper, pptx_helper, text_helper
 
 
 
 
 
 
 
 
 
19
 
20
 
21
  @st.cache_data
22
- def _load_strings() -> dict:
23
  """
24
- Load various strings to be displayed in the app.
25
- :return: The dictionary of strings.
 
 
26
  """
27
 
28
- with open(GlobalConfig.APP_STRINGS_FILE, 'r', encoding='utf-8') as in_file:
29
- return json5.loads(in_file.read())
30
 
31
 
32
- @st.cache_data
33
- def _get_prompt_template(is_refinement: bool) -> str:
34
  """
35
- Return a prompt template.
36
 
37
- :param is_refinement: Whether this is the initial or refinement prompt.
38
- :return: The prompt template as f-string.
39
  """
40
 
41
- if is_refinement:
42
- with open(GlobalConfig.REFINEMENT_PROMPT_TEMPLATE, 'r', encoding='utf-8') as in_file:
43
- template = in_file.read()
44
- else:
45
- with open(GlobalConfig.INITIAL_PROMPT_TEMPLATE, 'r', encoding='utf-8') as in_file:
46
- template = in_file.read()
47
 
48
- return template
49
 
 
 
 
 
50
 
51
- # @st.cache_resource
52
- # def _get_tokenizer() -> AutoTokenizer:
53
- # """
54
- # Get Mistral tokenizer for counting tokens.
55
- #
56
- # :return: The tokenizer.
57
- # """
58
- #
59
- # return AutoTokenizer.from_pretrained(
60
- # pretrained_model_name_or_path=GlobalConfig.HF_LLM_MODEL_NAME
61
- # )
62
 
 
 
 
 
 
 
63
 
64
- APP_TEXT = _load_strings()
 
65
 
66
- # Session variables
67
- CHAT_MESSAGES = 'chat_messages'
68
- DOWNLOAD_FILE_KEY = 'download_file_name'
69
- IS_IT_REFINEMENT = 'is_it_refinement'
70
 
71
- logger = logging.getLogger(__name__)
72
- progress_bar = st.progress(0, text='Setting up SlideDeck AI...')
73
 
74
- texts = list(GlobalConfig.PPTX_TEMPLATE_FILES.keys())
75
- captions = [GlobalConfig.PPTX_TEMPLATE_FILES[x]['caption'] for x in texts]
76
- pptx_template = st.sidebar.radio(
77
- 'Select a presentation template:',
78
- texts,
79
- captions=captions,
80
- horizontal=True
81
- )
 
 
 
 
 
 
 
 
 
 
 
 
82
 
83
 
84
- def display_page_header_content():
85
  """
86
- Display content in the page header.
87
  """
88
 
 
 
89
  st.title(APP_TEXT['app_name'])
90
  st.subheader(APP_TEXT['caption'])
91
  st.markdown(
92
- '![Visitors](https://api.visitorbadge.io/api/visitors?path=https%3A%2F%2Fhuggingface.co%2Fspaces%2Fbarunsaha%2Fslide-deck-ai&countColor=%23263759)' # noqa: E501
 
 
 
 
 
93
  )
94
 
 
 
 
 
 
 
 
 
 
 
 
 
95
 
96
- def display_page_footer_content():
97
- """
98
- Display content in the page footer.
99
- """
100
-
101
- st.text(APP_TEXT['tos'] + '\n\n' + APP_TEXT['tos2'])
102
 
 
 
 
 
 
 
103
 
104
- def build_ui():
105
- """
106
- Display the input elements for content generation.
107
- """
108
 
109
- display_page_header_content()
 
 
110
 
111
- with st.expander('Usage Policies and Limitations'):
112
- display_page_footer_content()
 
 
113
 
114
- progress_bar.progress(50, text='Setting up chat interface...')
115
- set_up_chat_ui()
116
 
 
 
 
117
 
118
- def set_up_chat_ui():
119
- """
120
- Prepare the chat interface and related functionality.
121
- """
122
 
123
- with st.expander('Usage Instructions'):
124
- st.markdown(GlobalConfig.CHAT_USAGE_INSTRUCTIONS)
125
- st.markdown(
126
- 'SlideDeck AI is powered by'
127
- ' [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)'
128
- )
129
 
130
- # view_messages = st.expander('View the messages in the session state')
 
 
131
 
132
- st.chat_message('ai').write(
133
- random.choice(APP_TEXT['ai_greetings'])
134
- )
135
- progress_bar.progress(100, text='Done!')
136
- progress_bar.empty()
137
 
138
- history = StreamlitChatMessageHistory(key=CHAT_MESSAGES)
139
 
140
- if _is_it_refinement():
141
- template = _get_prompt_template(is_refinement=True)
142
  else:
143
- template = _get_prompt_template(is_refinement=False)
144
-
145
- prompt_template = ChatPromptTemplate.from_template(template)
146
-
147
- # Since Streamlit app reloads at every interaction, display the chat history
148
- # from the save session state
149
- for msg in history.messages:
150
- msg_type = msg.type
151
- if msg_type == 'user':
152
- st.chat_message(msg_type).write(msg.content)
153
- else:
154
- st.chat_message(msg_type).code(msg.content, language='json')
155
-
156
- if prompt := st.chat_input(
157
- placeholder=APP_TEXT['chat_placeholder'],
158
- max_chars=GlobalConfig.LLM_MODEL_MAX_INPUT_LENGTH
159
- ):
160
-
161
- progress_bar_pptx = st.progress(0, 'Preparing to run...')
162
- if not text_helper.is_valid_prompt(prompt):
163
- st.error(
164
- 'Not enough information provided!'
165
- ' Please be a little more descriptive and type a few words'
166
- ' with a few characters :)'
167
- )
168
- return
169
-
170
- logger.info('User input: %s | #characters: %d', prompt, len(prompt))
171
- st.chat_message('user').write(prompt)
172
-
173
- user_messages = _get_user_messages()
174
- user_messages.append(prompt)
175
- list_of_msgs = [
176
- f'{idx + 1}. {msg}' for idx, msg in enumerate(user_messages)
177
- ]
178
- list_of_msgs = '\n'.join(list_of_msgs)
179
-
180
- if _is_it_refinement():
181
- formatted_template = prompt_template.format(
182
- **{
183
- 'instructions': list_of_msgs,
184
- 'previous_content': _get_last_response()
185
- }
186
- )
187
- else:
188
- formatted_template = prompt_template.format(
189
- **{
190
- 'question': prompt,
191
- }
192
- )
193
-
194
- progress_bar_pptx.progress(5, 'Calling LLM...will retry if connection times out...')
195
- response: dict = llm_helper.hf_api_query({
196
- 'inputs': formatted_template,
197
- 'parameters': {
198
- 'temperature': GlobalConfig.LLM_MODEL_TEMPERATURE,
199
- 'min_length': GlobalConfig.LLM_MODEL_MIN_OUTPUT_LENGTH,
200
- 'max_length': GlobalConfig.LLM_MODEL_MAX_OUTPUT_LENGTH,
201
- 'max_new_tokens': GlobalConfig.LLM_MODEL_MAX_OUTPUT_LENGTH,
202
- 'num_return_sequences': 1,
203
- 'return_full_text': False,
204
- # "repetition_penalty": 0.0001
205
- },
206
- 'options': {
207
- 'wait_for_model': True,
208
- 'use_cache': True
209
- }
210
- })
211
-
212
- if len(response) > 0 and 'generated_text' in response[0]:
213
- response: str = response[0]['generated_text'].strip()
214
-
215
- st.chat_message('ai').code(response, language='json')
216
-
217
- history.add_user_message(prompt)
218
- history.add_ai_message(response)
219
-
220
- # if GlobalConfig.COUNT_TOKENS:
221
- # tokenizer = _get_tokenizer()
222
- # tokens_count_in = len(tokenizer.tokenize(formatted_template))
223
- # tokens_count_out = len(tokenizer.tokenize(response))
224
- # logger.debug(
225
- # 'Tokens count:: input: %d, output: %d',
226
- # tokens_count_in, tokens_count_out
227
- # )
228
-
229
- # _display_messages_history(view_messages)
230
-
231
- # The content has been generated as JSON
232
- # There maybe trailing ``` at the end of the response -- remove them
233
- # To be careful: ``` may be part of the content as well when code is generated
234
- progress_bar_pptx.progress(50, 'Analyzing response...')
235
- response_cleaned = text_helper.get_clean_json(response)
236
-
237
- logger.info(
238
- 'Cleaned JSON response:: original length: %d | cleaned length: %d',
239
- len(response), len(response_cleaned)
240
- )
241
- logger.debug('Cleaned JSON: %s', response_cleaned)
242
 
243
- # Now create the PPT file
244
- progress_bar_pptx.progress(75, 'Creating the slide deck...give it a moment...')
245
- generate_slide_deck(response_cleaned)
246
- progress_bar_pptx.progress(100, text='Done!')
247
 
248
- logger.info(
249
- '#messages in history / 2: %d',
250
- len(st.session_state[CHAT_MESSAGES]) / 2
251
- )
252
-
253
-
254
- def generate_slide_deck(json_str: str):
255
  """
256
- Create a slide deck.
257
 
258
- :param json_str: The content in *valid* JSON format.
 
 
259
  """
260
 
261
- if DOWNLOAD_FILE_KEY in st.session_state:
262
- path = pathlib.Path(st.session_state[DOWNLOAD_FILE_KEY])
263
- else:
264
- temp = tempfile.NamedTemporaryFile(delete=False, suffix='.pptx')
265
- path = pathlib.Path(temp.name)
266
- st.session_state[DOWNLOAD_FILE_KEY] = str(path)
267
-
268
- if temp:
269
- temp.close()
270
-
271
- logger.debug('Creating PPTX file: %s...', st.session_state[DOWNLOAD_FILE_KEY])
272
 
273
  try:
274
- pptx_helper.generate_powerpoint_presentation(
275
- json_str,
276
- slides_template=pptx_template,
277
- output_file_path=path
278
- )
279
- except ValueError:
280
- # st.error(
281
- # f"{APP_TEXT['json_parsing_error']}"
282
- # f"\n\nAdditional error info: {ve}"
283
- # f"\n\nHere are some sample instructions that you could try to possibly fix this error;"
284
- # f" if these don't work, try rephrasing or refreshing:"
285
- # f"\n\n"
286
- # "- Regenerate content and fix the JSON error."
287
- # "\n- Regenerate content and fix the JSON error. Quotes inside quotes should be escaped."
288
- # )
289
- # logger.error('%s', APP_TEXT['json_parsing_error'])
290
- # logger.error('Additional error info: %s', str(ve))
291
  st.error(
292
- 'Encountered error while parsing JSON...will fix it and retry'
293
- )
294
- logger.error(
295
- 'Caught ValueError: trying again after repairing JSON...'
296
  )
297
 
298
- pptx_helper.generate_powerpoint_presentation(
299
- text_helper.fix_malformed_json(json_str),
300
- slides_template=pptx_template,
301
- output_file_path=path
302
- )
303
- except Exception as ex:
304
- st.error(APP_TEXT['content_generation_error'])
305
- logger.error('Caught a generic exception: %s', str(ex))
306
- finally:
307
- _display_download_button(path)
308
-
309
-
310
- def _is_it_refinement() -> bool:
311
- """
312
- Whether it is the initial prompt or a refinement.
313
 
314
- :return: True if it is the initial prompt; False otherwise.
315
- """
316
-
317
- if IS_IT_REFINEMENT in st.session_state:
318
- return True
319
-
320
- if len(st.session_state[CHAT_MESSAGES]) >= 2:
321
- # Prepare for the next call
322
- st.session_state[IS_IT_REFINEMENT] = True
323
- return True
324
 
325
- return False
326
 
327
 
328
- def _get_user_messages() -> List[str]:
329
- """
330
- Get a list of user messages submitted until now from the session state.
331
-
332
- :return: The list of user messages.
333
  """
 
334
 
335
- return [
336
- msg.content for msg in st.session_state[CHAT_MESSAGES] if isinstance(msg, HumanMessage)
337
- ]
338
-
339
-
340
- def _get_last_response() -> str:
341
  """
342
- Get the last response generated by AI.
343
 
344
- :return: The response text.
345
- """
346
 
347
- return st.session_state[CHAT_MESSAGES][-1].content
 
 
 
 
348
 
 
 
349
 
350
- def _display_messages_history(view_messages: st.expander):
351
- """
352
- Display the history of messages.
 
 
 
 
 
353
 
354
- :param view_messages: The list of AI and Human messages.
355
- """
356
 
357
- with view_messages:
358
- view_messages.json(st.session_state[CHAT_MESSAGES])
359
 
360
 
361
- def _display_download_button(file_path: pathlib.Path):
362
  """
363
- Display a download button to download a slide deck.
364
 
365
- :param file_path: The path of the .pptx file.
366
  """
367
 
368
- with open(file_path, 'rb') as download_file:
369
- st.download_button(
370
- 'Download PPTX file ⬇️',
371
- data=download_file,
372
- file_name='Presentation.pptx',
373
- key=datetime.datetime.now()
374
- )
375
 
376
 
377
  def main():
 
 
 
1
  import pathlib
2
+ import logging
3
  import tempfile
4
+ from typing import List, Tuple
5
 
6
  import json5
7
+ import metaphor_python as metaphor
8
  import streamlit as st
 
 
 
 
 
 
9
 
10
+ import llm_helper
11
+ import pptx_helper
12
  from global_config import GlobalConfig
13
+
14
+
15
+ APP_TEXT = json5.loads(open(GlobalConfig.APP_STRINGS_FILE, 'r', encoding='utf-8').read())
16
+ GB_CONVERTER = 2 ** 30
17
+
18
+
19
+ logging.basicConfig(
20
+ level=GlobalConfig.LOG_LEVEL,
21
+ format='%(asctime)s - %(message)s',
22
+ )
23
 
24
 
25
  @st.cache_data
26
+ def get_contents_wrapper(text: str) -> str:
27
  """
28
+ Fetch and cache the slide deck contents on a topic by calling an external API.
29
+
30
+ :param text: The presentation topic
31
+ :return: The slide deck contents or outline in JSON format
32
  """
33
 
34
+ logging.info('LLM call because of cache miss...')
35
+ return llm_helper.generate_slides_content(text).strip()
36
 
37
 
38
+ @st.cache_resource
39
+ def get_metaphor_client_wrapper() -> metaphor.Metaphor:
40
  """
41
+ Create a Metaphor client for semantic Web search.
42
 
43
+ :return: Metaphor instance
 
44
  """
45
 
46
+ return metaphor.Metaphor(api_key=GlobalConfig.METAPHOR_API_KEY)
 
 
 
 
 
47
 
 
48
 
49
+ @st.cache_data
50
+ def get_web_search_results_wrapper(text: str) -> List[Tuple[str, str]]:
51
+ """
52
+ Fetch and cache the Web search results on a given topic.
53
 
54
+ :param text: The topic
55
+ :return: A list of (title, link) tuples
56
+ """
 
 
 
 
 
 
 
 
57
 
58
+ results = []
59
+ search_results = get_metaphor_client_wrapper().search(
60
+ text,
61
+ use_autoprompt=True,
62
+ num_results=5
63
+ )
64
 
65
+ for a_result in search_results.results:
66
+ results.append((a_result.title, a_result.url))
67
 
68
+ return results
 
 
 
69
 
 
 
70
 
71
+ # def get_disk_used_percentage() -> float:
72
+ # """
73
+ # Compute the disk usage.
74
+ #
75
+ # :return: Percentage of the disk space currently used
76
+ # """
77
+ #
78
+ # total, used, free = shutil.disk_usage(__file__)
79
+ # total = total // GB_CONVERTER
80
+ # used = used // GB_CONVERTER
81
+ # free = free // GB_CONVERTER
82
+ # used_perc = 100.0 * used / total
83
+ #
84
+ # logging.debug(f'Total: {total} GB\n'
85
+ # f'Used: {used} GB\n'
86
+ # f'Free: {free} GB')
87
+ #
88
+ # logging.debug('\n'.join(os.listdir()))
89
+ #
90
+ # return used_perc
91
 
92
 
93
+ def build_ui():
94
  """
95
+ Display the input elements for content generation. Only covers the first step.
96
  """
97
 
98
+ # get_disk_used_percentage()
99
+
100
  st.title(APP_TEXT['app_name'])
101
  st.subheader(APP_TEXT['caption'])
102
  st.markdown(
103
+ 'Powered by'
104
+ ' [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).'
105
+ )
106
+ st.markdown(
107
+ '*If the JSON is generated or parsed incorrectly, try again later by making minor changes'
108
+ ' to the input text.*'
109
  )
110
 
111
+ with st.form('my_form'):
112
+ # Topic input
113
+ try:
114
+ with open(GlobalConfig.PRELOAD_DATA_FILE, 'r', encoding='utf-8') as in_file:
115
+ preload_data = json5.loads(in_file.read())
116
+ except (FileExistsError, FileNotFoundError):
117
+ preload_data = {'topic': '', 'audience': ''}
118
+
119
+ topic = st.text_area(
120
+ APP_TEXT['input_labels'][0],
121
+ value=preload_data['topic']
122
+ )
123
 
124
+ texts = list(GlobalConfig.PPTX_TEMPLATE_FILES.keys())
125
+ captions = [GlobalConfig.PPTX_TEMPLATE_FILES[x]['caption'] for x in texts]
 
 
 
 
126
 
127
+ pptx_template = st.radio(
128
+ 'Select a presentation template:',
129
+ texts,
130
+ captions=captions,
131
+ horizontal=True
132
+ )
133
 
134
+ st.divider()
135
+ submit = st.form_submit_button('Generate slide deck')
 
 
136
 
137
+ if submit:
138
+ # st.write(f'Clicked {time.time()}')
139
+ st.session_state.submitted = True
140
 
141
+ # https://github.com/streamlit/streamlit/issues/3832#issuecomment-1138994421
142
+ if 'submitted' in st.session_state:
143
+ progress_text = 'Generating the slides...give it a moment'
144
+ progress_bar = st.progress(0, text=progress_text)
145
 
146
+ topic_txt = topic.strip()
147
+ generate_presentation(topic_txt, pptx_template, progress_bar)
148
 
149
+ st.divider()
150
+ st.text(APP_TEXT['tos'])
151
+ st.text(APP_TEXT['tos2'])
152
 
153
+ st.markdown(
154
+ '![Visitors]'
155
+ '(https://api.visitorbadge.io/api/visitors?path=https%3A%2F%2Fhuggingface.co%2Fspaces%2Fbarunsaha%2Fslide-deck-ai&countColor=%23263759)'
156
+ )
157
 
 
 
 
 
 
 
158
 
159
+ def generate_presentation(topic: str, pptx_template: str, progress_bar):
160
+ """
161
+ Process the inputs to generate the slides.
162
 
163
+ :param topic: The presentation topic based on which contents are to be generated
164
+ :param pptx_template: The PowerPoint template name to be used
165
+ :param progress_bar: Progress bar from the page
166
+ :return:
167
+ """
168
 
169
+ topic_length = len(topic)
170
+ logging.debug('Input length:: topic: %s', topic_length)
171
+
172
+ if topic_length >= 10:
173
+ logging.debug('Topic: %s', topic)
174
+ target_length = min(topic_length, GlobalConfig.LLM_MODEL_MAX_INPUT_LENGTH)
175
+
176
+ try:
177
+ # Step 1: Generate the contents in JSON format using an LLM
178
+ json_str = process_slides_contents(topic[:target_length], progress_bar)
179
+ logging.debug('Truncated topic: %s', topic[:target_length])
180
+ logging.debug('Length of JSON: %d', len(json_str))
181
+
182
+ # Step 2: Generate the slide deck based on the template specified
183
+ if len(json_str) > 0:
184
+ st.info(
185
+ 'Tip: The generated content doesn\'t look so great?'
186
+ ' Need alternatives? Just change your description text and try again.',
187
+ icon="💡️"
188
+ )
189
+ else:
190
+ st.error(
191
+ 'Unfortunately, JSON generation failed, so the next steps would lead'
192
+ ' to nowhere. Try again or come back later.'
193
+ )
194
+ return
195
+
196
+ all_headers = generate_slide_deck(json_str, pptx_template, progress_bar)
197
+
198
+ # Step 3: Bonus stuff: Web references and AI art
199
+ show_bonus_stuff(all_headers)
200
+
201
+ except ValueError as ve:
202
+ st.error(f'Unfortunately, an error occurred: {ve}! '
203
+ f'Please change the text, try again later, or report it, sharing your inputs.')
204
 
 
 
205
  else:
206
+ st.error('Not enough information provided! Please be little more descriptive :)')
207
 
 
 
 
 
208
 
209
+ def process_slides_contents(text: str, progress_bar: st.progress) -> str:
 
 
 
 
 
 
210
  """
211
+ Convert given text into structured data and display. Update the UI.
212
 
213
+ :param text: The topic description for the presentation
214
+ :param progress_bar: Progress bar for this step
215
+ :return: The contents as a JSON-formatted string
216
  """
217
 
218
+ json_str = ''
 
 
 
 
 
 
 
 
 
 
219
 
220
  try:
221
+ logging.info('Calling LLM for content generation on the topic: %s', text)
222
+ json_str = get_contents_wrapper(text)
223
+ except Exception as ex:
 
 
 
 
 
 
 
 
 
 
 
 
 
 
224
  st.error(
225
+ f'An exception occurred while trying to convert to JSON. It could be because of heavy'
226
+ f' traffic or something else. Try doing it again or try again later.'
227
+ f'\nError message: {ex}'
 
228
  )
229
 
230
+ progress_bar.progress(50, text='Contents generated')
 
 
 
 
 
 
 
 
 
 
 
 
 
 
231
 
232
+ with st.expander('The generated contents (in JSON format)'):
233
+ st.code(json_str, language='json')
 
 
 
 
 
 
 
 
234
 
235
+ return json_str
236
 
237
 
238
+ def generate_slide_deck(json_str: str, pptx_template: str, progress_bar) -> List:
 
 
 
 
239
  """
240
+ Create a slide deck.
241
 
242
+ :param json_str: The contents in JSON format
243
+ :param pptx_template: The PPTX template name
244
+ :param progress_bar: Progress bar
245
+ :return: A list of all slide headers and the title
 
 
246
  """
 
247
 
248
+ progress_text = 'Creating the slide deck...give it a moment'
249
+ progress_bar.progress(75, text=progress_text)
250
 
251
+ # # Get a unique name for the file to save -- use the session ID
252
+ # ctx = st_sr.get_script_run_ctx()
253
+ # session_id = ctx.session_id
254
+ # timestamp = time.time()
255
+ # output_file_name = f'{session_id}_{timestamp}.pptx'
256
 
257
+ temp = tempfile.NamedTemporaryFile(delete=False, suffix='.pptx')
258
+ path = pathlib.Path(temp.name)
259
 
260
+ logging.info('Creating PPTX file...')
261
+ all_headers = pptx_helper.generate_powerpoint_presentation(
262
+ json_str,
263
+ as_yaml=False,
264
+ slides_template=pptx_template,
265
+ output_file_path=path
266
+ )
267
+ progress_bar.progress(100, text='Done!')
268
 
269
+ with open(path, 'rb') as f:
270
+ st.download_button('Download PPTX file', f, file_name='Presentation.pptx')
271
 
272
+ return all_headers
 
273
 
274
 
275
+ def show_bonus_stuff(ppt_headers: List[str]):
276
  """
277
+ Show bonus stuff for the presentation.
278
 
279
+ :param ppt_headers: A list of the slide headings.
280
  """
281
 
282
+ # Use the presentation title and the slide headers to find relevant info online
283
+ logging.info('Calling Metaphor search...')
284
+ ppt_text = ' '.join(ppt_headers)
285
+ search_results = get_web_search_results_wrapper(ppt_text)
286
+ md_text_items = []
287
+
288
+ for (title, link) in search_results:
289
+ md_text_items.append(f'[{title}]({link})')
290
+
291
+ with st.expander('Related Web references'):
292
+ st.markdown('\n\n'.join(md_text_items))
293
+
294
+ logging.info('Done!')
295
+
296
+ # # Avoid image generation. It costs time and an API call, so just limit to the text generation.
297
+ # with st.expander('AI-generated image on the presentation topic'):
298
+ # logging.info('Calling SDXL for image generation...')
299
+ # # img_empty.write('')
300
+ # # img_text.write(APP_TEXT['image_info'])
301
+ # image = get_ai_image_wrapper(ppt_text)
302
+ #
303
+ # if len(image) > 0:
304
+ # image = base64.b64decode(image)
305
+ # st.image(image, caption=ppt_text)
306
+ # st.info('Tip: Right-click on the image to save it.', icon="💡️")
307
+ # logging.info('Image added')
308
 
309
 
310
  def main():
clarifai_grpc_helper.py ADDED
@@ -0,0 +1,71 @@
1
+ from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
2
+ from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc
3
+ from clarifai_grpc.grpc.api.status import status_code_pb2
4
+
5
+ from global_config import GlobalConfig
6
+
7
+
8
+ CHANNEL = ClarifaiChannel.get_grpc_channel()
9
+ STUB = service_pb2_grpc.V2Stub(CHANNEL)
10
+
11
+ METADATA = (
12
+ ('authorization', 'Key ' + GlobalConfig.CLARIFAI_PAT),
13
+ )
14
+
15
+ USER_DATA_OBJECT = resources_pb2.UserAppIDSet(
16
+ user_id=GlobalConfig.CLARIFAI_USER_ID,
17
+ app_id=GlobalConfig.CLARIFAI_APP_ID
18
+ )
19
+
20
+ RAW_TEXT = '''You are a helpful, intelligent chatbot. Create the slides for a presentation on the given topic. Include main headings for each slide, detailed bullet points for each slide. Add relevant content to each slide. Do not output any blank line.
21
+
22
+ Topic:
23
+ Talk about AI, covering what it is and how it works. Add its pros, cons, and future prospects. Also, cover its job prospects.
24
+ '''
25
+
26
+
27
+ def get_text_from_llm(prompt: str) -> str:
28
+ post_model_outputs_response = STUB.PostModelOutputs(
29
+ service_pb2.PostModelOutputsRequest(
30
+ user_app_id=USER_DATA_OBJECT, # The userDataObject is created in the overview and is required when using a PAT
31
+ model_id=GlobalConfig.CLARIFAI_MODEL_ID,
32
+ # version_id=MODEL_VERSION_ID, # This is optional. Defaults to the latest model version
33
+ inputs=[
34
+ resources_pb2.Input(
35
+ data=resources_pb2.Data(
36
+ text=resources_pb2.Text(
37
+ raw=prompt
38
+ )
39
+ )
40
+ )
41
+ ]
42
+ ),
43
+ metadata=METADATA
44
+ )
45
+
46
+ if post_model_outputs_response.status.code != status_code_pb2.SUCCESS:
47
+ print(post_model_outputs_response.status)
48
+ raise Exception(f"Post model outputs failed, status: {post_model_outputs_response.status.description}")
49
+
50
+ # Since we have one input, one output will exist here
51
+ output = post_model_outputs_response.outputs[0]
52
+
53
+ # print("Completion:\n")
54
+ # print(output.data.text.raw)
55
+
56
+ return output.data.text.raw
57
+
58
+
59
+ if __name__ == '__main__':
60
+ topic = ('Talk about AI, covering what it is and how it works.'
61
+ ' Add its pros, cons, and future prospects.'
62
+ ' Also, cover its job prospects.'
63
+ )
64
+ print(topic)
65
+
66
+ with open(GlobalConfig.SLIDES_TEMPLATE_FILE, 'r') as in_file:
67
+ prompt_txt = in_file.read()
68
+ prompt_txt = prompt_txt.replace('{topic}', topic)
69
+ response_txt = get_text_from_llm(prompt_txt)
70
+
71
+ print('Output:\n', response_txt)
examples/example_04.json DELETED
@@ -1,3 +0,0 @@
- {
- "topic": "12 slides on a basic tutorial on Python along with examples"
- }

global_config.py CHANGED
@@ -1,4 +1,3 @@
- import logging
  import os

  from dataclasses import dataclass
@@ -13,25 +12,22 @@ class GlobalConfig:
  HF_LLM_MODEL_NAME = 'mistralai/Mistral-7B-Instruct-v0.2'
  LLM_MODEL_TEMPERATURE: float = 0.2
  LLM_MODEL_MIN_OUTPUT_LENGTH: int = 50
- LLM_MODEL_MAX_OUTPUT_LENGTH: int = 4096
- LLM_MODEL_MAX_INPUT_LENGTH: int = 750
+ LLM_MODEL_MAX_OUTPUT_LENGTH: int = 2000
+ LLM_MODEL_MAX_INPUT_LENGTH: int = 300

  HUGGINGFACEHUB_API_TOKEN = os.environ.get('HUGGINGFACEHUB_API_TOKEN', '')
  METAPHOR_API_KEY = os.environ.get('METAPHOR_API_KEY', '')

  LOG_LEVEL = 'DEBUG'
- COUNT_TOKENS = False
  APP_STRINGS_FILE = 'strings.json'
  PRELOAD_DATA_FILE = 'examples/example_02.json'
  SLIDES_TEMPLATE_FILE = 'langchain_templates/template_combined.txt'
- # JSON_TEMPLATE_FILE = 'langchain_templates/text_to_json_template_02.txt'
- INITIAL_PROMPT_TEMPLATE = 'langchain_templates/chat_prompts/initial_template_v3_two_cols.txt'
- REFINEMENT_PROMPT_TEMPLATE = 'langchain_templates/chat_prompts/refinement_template_v3_two_cols.txt'
+ JSON_TEMPLATE_FILE = 'langchain_templates/text_to_json_template_02.txt'

  PPTX_TEMPLATE_FILES = {
- 'Basic': {
+ 'Blank': {
  'file': 'pptx_templates/Blank.pptx',
- 'caption': 'A good start (Uses [photos](https://unsplash.com/photos/AFZ-qBPEceA) by [cetteup](https://unsplash.com/@cetteup?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash) on [Unsplash](https://unsplash.com/photos/a-foggy-forest-filled-with-lots-of-trees-d3ci37Gcgxg?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash))'
+ 'caption': 'A good start'
  },
  'Ion Boardroom': {
  'file': 'pptx_templates/Ion_Boardroom.pptx',
@@ -42,30 +38,3 @@ class GlobalConfig:
  'caption': 'Marvel in a monochrome dream'
  }
  }
-
- # This is a long text, so not incorporated as a string in `strings.json`
- CHAT_USAGE_INSTRUCTIONS = (
- 'Briefly describe your topic of presentation in the textbox provided below.'
- ' For example:\n'
- '- Make a slide deck on AI.'
- '\n\n'
- 'Subsequently, you can add follow-up instructions, e.g.:\n'
- '- Can you add a slide on GPUs?'
- '\n\n'
- ' You can also ask it to refine any particular slide, e.g.:\n'
- '- Make the slide with title \'Examples of AI\' a bit more descriptive.'
- '\n\n'
- 'See this [demo video](https://youtu.be/QvAKzNKtk9k) for a brief walkthrough.'
- 'SlideDeck AI does not have access to the Web.'
- '\n\n'
- 'If you like SlideDeck AI, please consider leaving a heart ❤️ on the'
- ' [Hugging Face Space](https://huggingface.co/spaces/barunsaha/slide-deck-ai/) or'
- ' a star ⭐ on [GitHub](https://github.com/barun-saha/slide-deck-ai).'
- )
-
-
- logging.basicConfig(
- level=GlobalConfig.LOG_LEVEL,
- format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
- datefmt='%Y-%m-%d %H:%M:%S'
- )
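For local development, `global_config.py` (both before and after this change) reads `HUGGINGFACEHUB_API_TOKEN` and `METAPHOR_API_KEY` from environment variables, and the README suggests keeping them in a `.env` file. Below is a minimal sketch of one way to wire that up; the use of `python-dotenv` is an assumption, as this commit does not show how the `.env` file is actually loaded.

```python
# Illustrative only; assumes the python-dotenv package (pip install python-dotenv).
import os

from dotenv import load_dotenv

# Example .env file contents:
#   HUGGINGFACEHUB_API_TOKEN=hf_xxxxxxxx
#   METAPHOR_API_KEY=xxxxxxxx
load_dotenv()  # reads .env from the current working directory into os.environ

HUGGINGFACEHUB_API_TOKEN = os.environ.get('HUGGINGFACEHUB_API_TOKEN', '')
METAPHOR_API_KEY = os.environ.get('METAPHOR_API_KEY', '')
```
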
helpers/__init__.py DELETED
File without changes
helpers/pptx_helper.py DELETED
@@ -1,540 +0,0 @@
1
- import logging
2
- import pathlib
3
- import re
4
- import tempfile
5
-
6
- from typing import List, Tuple
7
-
8
- import json5
9
- import pptx
10
- from pptx.enum.shapes import MSO_AUTO_SHAPE_TYPE
11
-
12
- from global_config import GlobalConfig
13
-
14
-
15
- # English Metric Unit (used by PowerPoint) to inches
16
- EMU_TO_INCH_SCALING_FACTOR = 1.0 / 914400
17
- INCHES_1_5 = pptx.util.Inches(1.5)
18
- INCHES_1 = pptx.util.Inches(1)
19
- INCHES_0_5 = pptx.util.Inches(0.5)
20
- INCHES_0_4 = pptx.util.Inches(0.4)
21
- INCHES_0_3 = pptx.util.Inches(0.3)
22
-
23
- STEP_BY_STEP_PROCESS_MARKER = '>> '
24
-
25
- PATTERN = re.compile(r"^slide[ ]+\d+:", re.IGNORECASE)
26
- SAMPLE_JSON_FOR_PPTX = '''
27
- {
28
- "title": "Understanding AI",
29
- "slides": [
30
- {
31
- "heading": "Introduction",
32
- "bullet_points": [
33
- "Brief overview of AI",
34
- [
35
- "Importance of understanding AI"
36
- ]
37
- ]
38
- }
39
- ]
40
- }
41
- '''
42
-
43
- logger = logging.getLogger(__name__)
44
-
45
-
46
- def remove_slide_number_from_heading(header: str) -> str:
47
- """
48
- Remove the slide number from a given slide header.
49
-
50
- :param header: The header of a slide.
51
- """
52
-
53
- if PATTERN.match(header):
54
- idx = header.find(':')
55
- header = header[idx + 1:]
56
-
57
- return header
58
-
59
-
60
- def generate_powerpoint_presentation(
61
- structured_data: str,
62
- slides_template: str,
63
- output_file_path: pathlib.Path
64
- ) -> List:
65
- """
66
- Create and save a PowerPoint presentation file containing the content in JSON format.
67
-
68
- :param structured_data: The presentation contents as "JSON" (may contain trailing commas).
69
- :param slides_template: The PPTX template to use.
70
- :param output_file_path: The path of the PPTX file to save as.
71
- :return A list of presentation title and slides headers.
72
- """
73
-
74
- # The structured "JSON" might contain trailing commas, so using json5
75
- parsed_data = json5.loads(structured_data)
76
-
77
- logger.debug(
78
- '*** Using PPTX template: %s',
79
- GlobalConfig.PPTX_TEMPLATE_FILES[slides_template]['file']
80
- )
81
- presentation = pptx.Presentation(GlobalConfig.PPTX_TEMPLATE_FILES[slides_template]['file'])
82
- slide_width_inch, slide_height_inch = _get_slide_width_height_inches(presentation)
83
-
84
- # The title slide
85
- title_slide_layout = presentation.slide_layouts[0]
86
- slide = presentation.slides.add_slide(title_slide_layout)
87
- title = slide.shapes.title
88
- subtitle = slide.placeholders[1]
89
- title.text = parsed_data['title']
90
- logger.info(
91
- 'PPT title: %s | #slides: %d',
92
- title.text, len(parsed_data['slides'])
93
- )
94
- subtitle.text = 'by Myself and SlideDeck AI :)'
95
- all_headers = [title.text, ]
96
-
97
- # Add content in a loop
98
- for a_slide in parsed_data['slides']:
99
- is_processing_done = _handle_double_col_layout(
100
- presentation=presentation,
101
- slide_json=a_slide,
102
- slide_width_inch=slide_width_inch,
103
- slide_height_inch=slide_height_inch
104
- )
105
-
106
- if not is_processing_done:
107
- is_processing_done = _handle_step_by_step_process(
108
- presentation=presentation,
109
- slide_json=a_slide,
110
- slide_width_inch=slide_width_inch,
111
- slide_height_inch=slide_height_inch
112
- )
113
-
114
- if not is_processing_done:
115
- _handle_default_display(
116
- presentation=presentation,
117
- slide_json=a_slide,
118
- slide_width_inch=slide_width_inch,
119
- slide_height_inch=slide_height_inch
120
- )
121
-
122
- # The thank-you slide
123
- last_slide_layout = presentation.slide_layouts[0]
124
- slide = presentation.slides.add_slide(last_slide_layout)
125
- title = slide.shapes.title
126
- title.text = 'Thank you!'
127
-
128
- presentation.save(output_file_path)
129
-
130
- return all_headers
131
-
132
-
133
- def get_flat_list_of_contents(items: list, level: int) -> List[Tuple]:
134
- """
135
- Flatten a (hierarchical) list of bullet points to a single list containing each item and
136
- its level.
137
-
138
- :param items: A bullet point (string or list).
139
- :param level: The current level of hierarchy.
140
- :return: A list of (bullet item text, hierarchical level) tuples.
141
- """
142
-
143
- flat_list = []
144
-
145
- for item in items:
146
- if isinstance(item, str):
147
- flat_list.append((item, level))
148
- elif isinstance(item, list):
149
- flat_list = flat_list + get_flat_list_of_contents(item, level + 1)
150
-
151
- return flat_list
152
-
153
-
154
- def _handle_default_display(
155
- presentation: pptx.Presentation,
156
- slide_json: dict,
157
- slide_width_inch: float,
158
- slide_height_inch: float
159
- ):
160
- """
161
- Display a list of text in a slide.
162
-
163
- :param presentation: The presentation object.
164
- :param slide_json: The content of the slide as JSON data.
165
- :param slide_width_inch: The width of the slide in inches.
166
- :param slide_height_inch: The height of the slide in inches.
167
- """
168
-
169
- bullet_slide_layout = presentation.slide_layouts[1]
170
- slide = presentation.slides.add_slide(bullet_slide_layout)
171
-
172
- shapes = slide.shapes
173
- title_shape = shapes.title
174
- body_shape = shapes.placeholders[1]
175
- title_shape.text = remove_slide_number_from_heading(slide_json['heading'])
176
- text_frame = body_shape.text_frame
177
-
178
- # The bullet_points may contain a nested hierarchy of JSON arrays
179
- # In some scenarios, it may contain objects (dictionaries) because the LLM generated so
180
- # ^ The second scenario is not covered
181
-
182
- flat_items_list = get_flat_list_of_contents(slide_json['bullet_points'], level=0)
183
-
184
- for idx, an_item in enumerate(flat_items_list):
185
- if idx == 0:
186
- text_frame.text = an_item[0].removeprefix(STEP_BY_STEP_PROCESS_MARKER)
187
- else:
188
- paragraph = text_frame.add_paragraph()
189
- paragraph.text = an_item[0].removeprefix(STEP_BY_STEP_PROCESS_MARKER)
190
- paragraph.level = an_item[1]
191
-
192
- _handle_key_message(
193
- the_slide=slide,
194
- slide_json=slide_json,
195
- slide_height_inch=slide_height_inch,
196
- slide_width_inch=slide_width_inch
197
- )
198
-
199
-
200
- def _handle_double_col_layout(
201
- presentation: pptx.Presentation(),
202
- slide_json: dict,
203
- slide_width_inch: float,
204
- slide_height_inch: float
205
- ) -> bool:
206
- """
207
- Add a slide with a double column layout for comparison.
208
-
209
- :param presentation: The presentation object.
210
- :param slide_json: The content of the slide as JSON data.
211
- :param slide_width_inch: The width of the slide in inches.
212
- :param slide_height_inch: The height of the slide in inches.
213
- :return: True if double col layout has been added; False otherwise.
214
- """
215
-
216
- if 'bullet_points' in slide_json and slide_json['bullet_points']:
217
- double_col_content = slide_json['bullet_points']
218
-
219
- if double_col_content and (
220
- len(double_col_content) == 2
221
- ) and isinstance(double_col_content[0], dict) and isinstance(double_col_content[1], dict):
222
- slide = presentation.slide_layouts[4]
223
- slide = presentation.slides.add_slide(slide)
224
-
225
- shapes = slide.shapes
226
- title_placeholder = shapes.title
227
- title_placeholder.text = remove_slide_number_from_heading(slide_json['heading'])
228
-
229
- left_heading, right_heading = shapes.placeholders[1], shapes.placeholders[3]
230
- left_col, right_col = shapes.placeholders[2], shapes.placeholders[4]
231
- left_col_frame, right_col_frame = left_col.text_frame, right_col.text_frame
232
-
233
- if 'heading' in double_col_content[0]:
234
- left_heading.text = double_col_content[0]['heading']
235
- if 'bullet_points' in double_col_content[0]:
236
- flat_items_list = get_flat_list_of_contents(
237
- double_col_content[0]['bullet_points'], level=0
238
- )
239
-
240
- for idx, an_item in enumerate(flat_items_list):
241
- if idx == 0:
242
- left_col_frame.text = an_item[0].removeprefix(STEP_BY_STEP_PROCESS_MARKER)
243
- else:
244
- paragraph = left_col_frame.add_paragraph()
245
- paragraph.text = an_item[0].removeprefix(STEP_BY_STEP_PROCESS_MARKER)
246
- paragraph.level = an_item[1]
247
-
248
- if 'heading' in double_col_content[1]:
249
- right_heading.text = double_col_content[1]['heading']
250
- if 'bullet_points' in double_col_content[1]:
251
- flat_items_list = get_flat_list_of_contents(
252
- double_col_content[1]['bullet_points'], level=0
253
- )
254
-
255
- for idx, an_item in enumerate(flat_items_list):
256
- if idx == 0:
257
- right_col_frame.text = an_item[0].removeprefix(STEP_BY_STEP_PROCESS_MARKER)
258
- else:
259
- paragraph = right_col_frame.add_paragraph()
260
- paragraph.text = an_item[0].removeprefix(STEP_BY_STEP_PROCESS_MARKER)
261
- paragraph.level = an_item[1]
262
-
263
- _handle_key_message(
264
- the_slide=slide,
265
- slide_json=slide_json,
266
- slide_height_inch=slide_height_inch,
267
- slide_width_inch=slide_width_inch
268
- )
269
-
270
- return True
271
-
272
- return False
273
-
274
-
275
- def _handle_step_by_step_process(
276
- presentation: pptx.Presentation,
277
- slide_json: dict,
278
- slide_width_inch: float,
279
- slide_height_inch: float
280
- ) -> bool:
281
- """
282
- Add shapes to display a step-by-step process in the slide, if available.
283
-
284
- :param presentation: The presentation object.
285
- :param slide_json: The content of the slide as JSON data.
286
- :param slide_width_inch: The width of the slide in inches.
287
- :param slide_height_inch: The height of the slide in inches.
288
- :return True if this slide has a step-by-step process depiction added; False otherwise.
289
- """
290
-
291
- if 'bullet_points' in slide_json and slide_json['bullet_points']:
292
- steps = slide_json['bullet_points']
293
-
294
- no_marker_count = 0.0
295
- n_steps = len(steps)
296
-
297
- # Ensure that it is a single list of strings without any sub-list
298
- for step in steps:
299
- if not isinstance(step, str):
300
- return False
301
-
302
- # In some cases, one or two steps may not begin with >>, e.g.:
303
- # {
304
- # "heading": "Step-by-Step Process: Creating a Legacy",
305
- # "bullet_points": [
306
- # "Identify your unique talents and passions",
307
- # ">> Develop your skills and knowledge",
308
- # ">> Create meaningful work",
309
- # ">> Share your work with the world",
310
- # ">> Continuously learn and adapt"
311
- # ],
312
- # "key_message": ""
313
- # },
314
- #
315
- # Use a threshold, e.g., at most 20%
316
- if not step.startswith(STEP_BY_STEP_PROCESS_MARKER):
317
- no_marker_count += 1
318
-
319
- slide_header = slide_json['heading'].lower()
320
- if (no_marker_count / n_steps > 0.25) and not (
321
- ('step-by-step' in slide_header) or ('step by step' in slide_header)
322
- ):
323
- return False
324
-
325
- bullet_slide_layout = presentation.slide_layouts[1]
326
- slide = presentation.slides.add_slide(bullet_slide_layout)
327
- shapes = slide.shapes
328
- shapes.title.text = remove_slide_number_from_heading(slide_json['heading'])
329
-
330
- if 3 <= n_steps <= 4:
331
- # Horizontal display
332
- height = INCHES_1_5
333
- width = pptx.util.Inches(slide_width_inch / n_steps - 0.01)
334
- top = pptx.util.Inches(slide_height_inch / 2)
335
- left = pptx.util.Inches((slide_width_inch - width.inches * n_steps) / 2 + 0.05)
336
-
337
- for step in steps:
338
- shape = shapes.add_shape(MSO_AUTO_SHAPE_TYPE.CHEVRON, left, top, width, height)
339
- shape.text = step.removeprefix(STEP_BY_STEP_PROCESS_MARKER)
340
- left += width - INCHES_0_4
341
- elif 4 < n_steps <= 6:
342
- # Vertical display
343
- height = pptx.util.Inches(0.65)
344
- top = pptx.util.Inches(slide_height_inch / 4)
345
- left = INCHES_1 # slide_width_inch - width.inches)
346
-
347
- # Find the close to median width, based on the length of each text, to be set
348
- # for the shapes
349
- width = pptx.util.Inches(slide_width_inch * 2 / 3)
350
- lengths = [len(step) for step in steps]
351
- font_size_20pt = pptx.util.Pt(20)
352
- widths = sorted(
353
- [
354
- min(
355
- pptx.util.Inches(font_size_20pt.inches * a_len),
356
- width
357
- ) for a_len in lengths
358
- ]
359
- )
360
- width = widths[len(widths) // 2]
361
-
362
- for step in steps:
363
- shape = shapes.add_shape(MSO_AUTO_SHAPE_TYPE.PENTAGON, left, top, width, height)
364
- shape.text = step.removeprefix(STEP_BY_STEP_PROCESS_MARKER)
365
- top += height + INCHES_0_3
366
- left += INCHES_0_5
367
- else:
368
- # Two steps -- probably not a process
369
- # More than 5--6 steps -- would likely cause a visual clutter
370
- return False
371
-
372
- return True
373
-
374
-
375
- def _handle_key_message(
376
- the_slide: pptx.slide.Slide,
377
- slide_json: dict,
378
- slide_width_inch: float,
379
- slide_height_inch: float
380
- ):
381
- """
382
- Add a shape to display the key message in the slide, if available.
383
-
384
- :param the_slide: The slide to be processed.
385
- :param slide_json: The content of the slide as JSON data.
386
- :param slide_width_inch: The width of the slide in inches.
387
- :param slide_height_inch: The height of the slide in inches.
388
- """
389
-
390
- if 'key_message' in slide_json and slide_json['key_message']:
391
- height = pptx.util.Inches(1.6)
392
- width = pptx.util.Inches(slide_width_inch / 2.3)
393
- top = pptx.util.Inches(slide_height_inch - height.inches - 0.1)
394
- left = pptx.util.Inches((slide_width_inch - width.inches) / 2)
395
- shape = the_slide.shapes.add_shape(
396
- MSO_AUTO_SHAPE_TYPE.ROUNDED_RECTANGLE,
397
- left=left,
398
- top=top,
399
- width=width,
400
- height=height
401
- )
402
- shape.text = slide_json['key_message']
403
-
404
-
405
- def _get_slide_width_height_inches(presentation: pptx.Presentation) -> Tuple[float, float]:
406
- """
407
- Get the dimensions of a slide in inches.
408
-
409
- :param presentation: The presentation object.
410
- :return: The width and the height.
411
- """
412
-
413
- slide_width_inch = EMU_TO_INCH_SCALING_FACTOR * presentation.slide_width
414
- slide_height_inch = EMU_TO_INCH_SCALING_FACTOR * presentation.slide_height
415
- # logger.debug('Slide width: %f, height: %f', slide_width_inch, slide_height_inch)
416
-
417
- return slide_width_inch, slide_height_inch
418
-
419
-
420
- if __name__ == '__main__':
421
- _JSON_DATA = '''
422
- {
423
- "title": "Understanding AI",
424
- "slides": [
425
- {
426
- "heading": "Introduction",
427
- "bullet_points": [
428
- "Brief overview of AI",
429
- [
430
- "Importance of understanding AI"
431
- ]
432
- ],
433
- "key_message": ""
434
- },
435
- {
436
- "heading": "What is AI?",
437
- "bullet_points": [
438
- "Definition of AI",
439
- [
440
- "Types of AI",
441
- [
442
- "Narrow or weak AI",
443
- "General or strong AI"
444
- ]
445
- ],
446
- "Differences between AI and machine learning"
447
- ],
448
- "key_message": ""
449
- },
450
- {
451
- "heading": "How AI Works",
452
- "bullet_points": [
453
- "Overview of AI algorithms",
454
- [
455
- "Types of AI algorithms",
456
- [
457
- "Rule-based systems",
458
- "Decision tree systems",
459
- "Neural networks"
460
- ]
461
- ],
462
- "How AI processes data"
463
- ],
464
- "key_message": ""
465
- },
466
- {
467
- "heading": "Building AI Models",
468
- "bullet_points": [
469
- ">> Collect data",
470
- ">> Select model or architecture to use",
471
- ">> Set appropriate parameters",
472
- ">> Train model with data",
473
- ">> Run inference",
474
- ],
475
- "key_message": ""
476
- },
477
- {
478
- "heading": "Pros and Cons: Deep Learning vs. Classical Machine Learning",
479
- "bullet_points": [
480
- {
481
- "heading": "Classical Machine Learning",
482
- "bullet_points": [
483
- "Interpretability: Easy to understand the model",
484
- "Faster Training: Quicker to train models",
485
- "Scalability: Can handle large datasets"
486
- ]
487
- },
488
- {
489
- "heading": "Deep Learning",
490
- "bullet_points": [
491
- "Handling Complex Data: Can learn from raw data",
492
- "Feature Extraction: Automatically learns features",
493
- "Improved Accuracy: Achieves higher accuracy"
494
- ]
495
- }
496
- ],
497
- "key_message": ""
498
- },
499
- {
500
- "heading": "Pros of AI",
501
- "bullet_points": [
502
- "Increased efficiency and productivity",
503
- "Improved accuracy and precision",
504
- "Enhanced decision-making capabilities",
505
- "Personalized experiences"
506
- ],
507
- "key_message": "AI can be used for many different purposes"
508
- },
509
- {
510
- "heading": "Cons of AI",
511
- "bullet_points": [
512
- "Job displacement and loss of employment",
513
- "Bias and discrimination",
514
- "Privacy and security concerns",
515
- "Dependence on technology"
516
- ],
517
- "key_message": ""
518
- },
519
- {
520
- "heading": "Future Prospects of AI",
521
- "bullet_points": [
522
- "Advancements in fields such as healthcare and finance",
523
- "Increased use"
524
- ],
525
- "key_message": ""
526
- }
527
- ]
528
- }'''
529
-
530
- temp = tempfile.NamedTemporaryFile(delete=False, suffix='.pptx')
531
- path = pathlib.Path(temp.name)
532
-
533
- generate_powerpoint_presentation(
534
- json5.loads(_JSON_DATA),
535
- output_file_path=path,
536
- slides_template='Basic'
537
- )
538
- print(f'File path: {path}')
539
-
540
- temp.close()
helpers/text_helper.py DELETED
@@ -1,89 +0,0 @@
1
- import json_repair as jr
2
-
3
-
4
- def is_valid_prompt(prompt: str) -> bool:
5
- """
6
- Verify whether user input satisfies the concerned constraints.
7
-
8
- :param prompt: The user input text.
9
- :return: True if all criteria are satisfied; False otherwise.
10
- """
11
-
12
- if len(prompt) < 7 or ' ' not in prompt:
13
- return False
14
-
15
- return True
16
-
17
-
18
- def get_clean_json(json_str: str) -> str:
19
- """
20
- Attempt to clean a JSON response string from the LLM by removing the trailing ```
21
- and any text beyond that.
22
- CAUTION: May not be always accurate.
23
-
24
- :param json_str: The input string in JSON format.
25
- :return: The "cleaned" JSON string.
26
- """
27
-
28
- # An example of response containing JSON and other text:
29
- # {
30
- # "title": "AI and the Future: A Transformative Journey",
31
- # "slides": [
32
- # ...
33
- # ]
34
- # } <<---- This is end of valid JSON content
35
- # ```
36
- #
37
- # ```vbnet
38
- # Please note that the JSON output is in valid format but the content of the "Role of GPUs in AI" slide is just an example and may not be factually accurate. For accurate information, you should consult relevant resources and update the content accordingly.
39
- # ```
40
- response_cleaned = json_str
41
-
42
- while True:
43
- idx = json_str.rfind('```') # -1 on failure
44
-
45
- if idx <= 0:
46
- break
47
-
48
- # In the ideal scenario, the character before the last ``` should be
49
- # a new line or a closing bracket }
50
- prev_char = json_str[idx - 1]
51
-
52
- if (prev_char == '}') or (prev_char == '\n' and json_str[idx - 2] == '}'):
53
- response_cleaned = json_str[:idx]
54
-
55
- json_str = json_str[:idx]
56
-
57
- return response_cleaned
58
-
59
-
60
- def fix_malformed_json(json_str: str) -> str:
61
- """
62
- Try and fix the syntax error(s) in a JSON string.
63
-
64
- :param json_str: The input JSON string.
65
- :return: The fixed JSOn string.
66
- """
67
-
68
- return jr.repair_json(json_str, skip_json_loads=True)
69
-
70
-
71
- if __name__ == '__main__':
72
- json1 = '''{
73
- "key": "value"
74
- }
75
- '''
76
- json2 = '''["Reason": "Regular updates help protect against known vulnerabilities."]'''
77
- json3 = '''["Reason" Regular updates help protect against known vulnerabilities."]'''
78
- json4 = '''
79
- {"bullet_points": [
80
- ">> Write without stopping or editing",
81
- >> Set daily writing goals and stick to them,
82
- ">> Allow yourself to make mistakes"
83
- ],}
84
- '''
85
-
86
- print(fix_malformed_json(json1))
87
- print(fix_malformed_json(json2))
88
- print(fix_malformed_json(json3))
89
- print(fix_malformed_json(json4))
langchain_templates/chat_prompts/initial_template.txt DELETED
@@ -1,41 +0,0 @@
1
- You are a helpful, intelligent chatbot. Create the slides for a presentation on the given topic.
2
- Include main headings for each slide, detailed bullet points for each slide.
3
- Add relevant content to each slide.
4
- The content of each slide should be verbose, descriptive, and very detailed.
5
- If relevant, add one or two examples to illustrate the concept.
6
- Unless explicitly specified with the topic, create about 10 slides.
7
-
8
-
9
- ### Topic:
10
- {question}
11
-
12
-
13
- The output must be only a valid and syntactically correct JSON adhering to the following schema:
14
- {{
15
- "title": "Presentation Title",
16
- "slides": [
17
- {{
18
- "heading": "Heading for the First Slide",
19
- "bullet_points": [
20
- "First bullet point",
21
- [
22
- "Sub-bullet point 1",
23
- "Sub-bullet point 2"
24
- ],
25
- "Second bullet point"
26
- ]
27
- }},
28
- {{
29
- "heading": "Heading for the Second Slide",
30
- "bullet_points": [
31
- "First bullet point",
32
- "Second bullet item",
33
- "Third bullet point"
34
- ]
35
- }}
36
- ]
37
- }}
38
-
39
-
40
- ### Output:
41
- ```json

langchain_templates/chat_prompts/initial_template_v2_steps.txt DELETED
@@ -1,59 +0,0 @@
1
- You are a helpful, intelligent chatbot. Create the slides for a presentation on the given topic.
2
- Include main headings for each slide, detailed bullet points for each slide.
3
- Add relevant content to each slide.
4
- The content of each slide should be verbose, descriptive, and very detailed.
5
- If relevant, add one or two examples to illustrate the concept.
6
- Unless explicitly specified with the topic, create about 10 slides.
7
-
8
-
9
- ### Topic:
10
- {question}
11
-
12
-
13
- The output must be only a valid and syntactically correct JSON adhering to the following schema:
14
- {{
15
- "title": "Presentation Title",
16
- "slides": [
17
- {{
18
- "heading": "Heading for the First Slide",
19
- "bullet_points": [
20
- "First bullet point",
21
- [
22
- "Sub-bullet point 1",
23
- "Sub-bullet point 2"
24
- ],
25
- "Second bullet point"
26
- ],
27
- "key_message": ""
28
- }},
29
- {{
30
- "heading": "Heading for the Second Slide",
31
- "bullet_points": [
32
- "First bullet point",
33
- "Second bullet item",
34
- "Third bullet point"
35
- ],
36
- "key_message": "The key message conveyed in this slide"
37
- }},
38
- {{
39
- "heading": "A slide that describes a step-by-step/sequential process",
40
- "bullet_points": [
41
- ">> The first step of the process (begins with special marker >>)",
42
- ">> A second step (begins with >>)",
43
- ">> Third step",
44
- ],
45
- "key_message": ""
46
- }}
47
- ]
48
- }}
49
-
50
-
51
- ### Some more hints on the slide content and JSON output format:
52
- - For two or three important slides, generate the key message that those slides convey and assign
53
- them to the `key_message` elements of JSON output.
54
- - Identify if a slide describes a step-by-step/sequential process, then begin the bullet points
55
- with a special marker >>. Limit this to max two or three slides.
56
-
57
-
58
- ### Output:
59
- ```json

langchain_templates/chat_prompts/initial_template_v3_two_cols.txt DELETED
@@ -1,78 +0,0 @@
1
- You are a helpful, intelligent chatbot. Create the slides for a presentation on the given topic.
2
- Include main headings for each slide, detailed bullet points for each slide.
3
- Add relevant content to each slide.
4
- The content of each slide should be VERBOSE, DESCRIPTIVE, and very DETAILED.
5
- If relevant, add one or two EXAMPLES to illustrate the concept.
6
- For two or three important slides, generate the key message that those slides convey.
7
- Identify if a slide describes a step-by-step/sequential process, then begin the bullet points with a special marker >>. Limit this to max two or three slides.
8
- Also, add at least one slide with a double column layout by generating appropriate content based on the description in the JSON schema provided below.
9
- ALWAYS add a concluding slide at the end, containing a list of the key takeaways and an optional call-to-action if relevant to the context.
10
- Unless explicitly instructed, create 10 TO 12 SLIDES in total.
11
-
12
-
13
- ### Topic:
14
- {question}
15
-
16
-
17
- The output must be only a valid and syntactically correct JSON adhering to the following schema:
18
- {{
19
- "title": "Presentation Title",
20
- "slides": [
21
- {{
22
- "heading": "Heading for the First Slide",
23
- "bullet_points": [
24
- "First bullet point",
25
- [
26
- "Sub-bullet point 1",
27
- "Sub-bullet point 2"
28
- ],
29
- "Second bullet point"
30
- ],
31
- "key_message": ""
32
- }},
33
- {{
34
- "heading": "Heading for the Second Slide",
35
- "bullet_points": [
36
- "First bullet point",
37
- "Second bullet item",
38
- "Third bullet point"
39
- ],
40
- "key_message": "The key message conveyed in this slide"
41
- }},
42
- {{
43
- "heading": "A slide that describes a step-by-step/sequential process",
44
- "bullet_points": [
45
- ">> The first step of the process (begins with special marker >>)",
46
- ">> A second step (begins with >>)",
47
- ">> Third step",
48
- ],
49
- "key_message": ""
50
- }},
51
- {{
52
- "heading": "A slide with a double column layout (useful for side-by-side comparison/contrasting of two related concepts, e.g., pros & cons, advantages & risks, old approach vs. modern approach, and so on)",
53
- "bullet_points": [
54
- {{
55
- "heading": "Heading of the left column",
56
- "bullet_points": [
57
- "First bullet point",
58
- "Second bullet item",
59
- "Third bullet point"
60
- ]
61
- }},
62
- {{
63
- "heading": "Heading of the right column",
64
- "bullet_points": [
65
- "First bullet point",
66
- "Second bullet item",
67
- "Third bullet point"
68
- ]
69
- }}
70
- ],
71
- "key_message": ""
72
- }}
73
- ]
74
- }}
75
-
76
-
77
- ### Output:
78
- ```json

langchain_templates/chat_prompts/refinement_template.txt DELETED
@@ -1,49 +0,0 @@
1
- You are a helpful, intelligent chatbot. You follow instructions to refine an existing slide deck.
2
- A list of user instructions is provided below in sequential order -- from the oldest to the latest.
3
- The previously generated content of the slide deck in JSON format is also provided.
4
- Follow the instructions to revise the content of the previously generated slides of the presentation on the given topic.
5
- Include main headings for each slide, detailed bullet points for each slide.
6
- Add relevant content to each slide.
7
- The content of the slides should be descriptive, verbose, and detailed.
8
- If relevant, add one or two examples to illustrate the concept.
9
- Unless explicitly specified with the topic, create about 10 slides.
10
- You also fix any syntax error that may be present in the JSON-formatted content.
11
-
12
-
13
- ### List of instructions:
14
- {instructions}
15
-
16
-
17
- ### Previously generated slide deck content as JSON:
18
- {previous_content}
19
-
20
-
21
- The output must be only a valid and syntactically correct JSON adhering to the following schema:
22
- {{
23
- "title": "Presentation Title",
24
- "slides": [
25
- {{
26
- "heading": "Heading for the First Slide",
27
- "bullet_points": [
28
- "First bullet point",
29
- [
30
- "Sub-bullet point 1",
31
- "Sub-bullet point 2"
32
- ],
33
- "Second bullet point"
34
- ]
35
- }},
36
- {{
37
- "heading": "Heading for the Second Slide",
38
- "bullet_points": [
39
- "First bullet point",
40
- "Second bullet item",
41
- "Third bullet point"
42
- ]
43
- }}
44
- ]
45
- }}
46
-
47
-
48
- ### Output:
49
- ```json

langchain_templates/chat_prompts/refinement_template_v2_steps.txt DELETED
@@ -1,70 +0,0 @@
1
- You are a helpful, intelligent chatbot. You follow instructions to refine an existing slide deck.
2
- A list of user instructions is provided below in sequential order -- from the oldest to the latest.
3
- The previously generated content of the slide deck in JSON format is also provided.
4
- Follow the instructions to revise the content of the previously generated slides of the presentation on the given topic.
5
- Include main headings for each slide, detailed bullet points for each slide.
6
- Add relevant content to each slide.
7
- The content of the slides should be descriptive, verbose, and detailed.
8
- If relevant, add one or two examples to illustrate the concept.
9
- Unless explicitly specified with the topic, create about 10 slides.
10
- You also fix any syntax error that may be present in the JSON-formatted content.
11
-
12
- A slide that describes a step-by-step/sequential process begins the bullet points
13
- with a special marker >>
14
-
15
-
16
- ### List of instructions:
17
- {instructions}
18
-
19
-
20
- ### Previously generated slide deck content as JSON:
21
- {previous_content}
22
-
23
-
24
- The output must be only a valid and syntactically correct JSON adhering to the following schema:
25
- {{
26
- "title": "Presentation Title",
27
- "slides": [
28
- {{
29
- "heading": "Heading for the First Slide",
30
- "bullet_points": [
31
- "First bullet point",
32
- [
33
- "Sub-bullet point 1",
34
- "Sub-bullet point 2"
35
- ],
36
- "Second bullet point"
37
- ],
38
- "key_message": ""
39
- }},
40
- {{
41
- "heading": "Heading for the Second Slide",
42
- "bullet_points": [
43
- "First bullet point",
44
- "Second bullet item",
45
- "Third bullet point"
46
- ],
47
- "key_message": "The key message conveyed in this slide"
48
- }},
49
- {{
50
- "heading": "A slide that describes a step-by-step/sequential process",
51
- "bullet_points": [
52
- ">> The first step of the process (begins with special marker >>)",
53
- ">> A second step (begins with >>)",
54
- ">> Third step",
55
- ],
56
- "key_message": ""
57
- }}
58
- ]
59
- }}
60
-
61
-
62
- ### Some more hints on the slide content and JSON output format:
63
- - For two or three important slides, generate the key message that those slides convey and assign
64
- them to the `key_message` elements of JSON output.
65
- - Identify if a slide describes a step-by-step/sequential process, then begin the bullet points
66
- with a special marker >>. Limit this to max two or three slides.
67
-
68
-
69
- ### Output:
70
- ```json

langchain_templates/chat_prompts/refinement_template_v3_two_cols.txt DELETED
@@ -1,85 +0,0 @@
1
- You are a helpful, intelligent chatbot. You follow instructions to refine an existing slide deck.
2
- A list of user instructions is provided below in sequential order -- from the oldest to the latest.
3
- The previously generated content of the slide deck in JSON format is also provided.
4
- Follow the instructions to revise the content of the previously generated slides of the presentation on the given topic.
5
- Include main headings for each slide, detailed bullet points for each slide.
6
- Add relevant content to each slide.
7
- The content of each slide should be VERBOSE, DESCRIPTIVE, and very DETAILED.
8
- If relevant, add one or two EXAMPLES to illustrate the concept.
9
- For two or three important slides, generate the key message that those slides convey.
10
- Identify if a slide describes a step-by-step/sequential process, then begin the bullet points with a special marker >>. Limit this to max two or three slides.
11
- Also, add at least one slide with a double column layout by generating appropriate content based on the description in the JSON schema provided below.
12
- ALWAYS add a concluding slide at the end, containing a list of the key takeaways and an optional call-to-action if relevant to the context.
13
- Unless explicitly instructed, create 10 TO 12 SLIDES in total.
14
-
15
-
16
- ### List of instructions:
17
- {instructions}
18
-
19
-
20
- ### Previously generated slide deck content as JSON:
21
- {previous_content}
22
-
23
-
24
- The output must be only a valid and syntactically correct JSON adhering to the following schema:
25
- {{
26
- "title": "Presentation Title",
27
- "slides": [
28
- {{
29
- "heading": "Heading for the First Slide",
30
- "bullet_points": [
31
- "First bullet point",
32
- [
33
- "Sub-bullet point 1",
34
- "Sub-bullet point 2"
35
- ],
36
- "Second bullet point"
37
- ],
38
- "key_message": ""
39
- }},
40
- {{
41
- "heading": "Heading for the Second Slide",
42
- "bullet_points": [
43
- "First bullet point",
44
- "Second bullet item",
45
- "Third bullet point"
46
- ],
47
- "key_message": "The key message conveyed in this slide"
48
- }},
49
- {{
50
- "heading": "A slide that describes a step-by-step/sequential process",
51
- "bullet_points": [
52
- ">> The first step of the process (begins with special marker >>)",
53
- ">> A second step (begins with >>)",
54
- ">> Third step",
55
- ],
56
- "key_message": ""
57
- }},
58
- {{
59
- "heading": "A slide with a double column layout (useful for side-by-side comparison/contrasting of two related concepts, e.g., pros & cons, advantages & risks, old approach vs. modern approach, and so on)",
60
- "bullet_points": [
61
- {{
62
- "heading": "Heading of the left column",
63
- "bullet_points": [
64
- "First bullet point",
65
- "Second bullet item",
66
- "Third bullet point"
67
- ]
68
- }},
69
- {{
70
- "heading": "Heading of the right column",
71
- "bullet_points": [
72
- "First bullet point",
73
- "Second bullet item",
74
- "Third bullet point"
75
- ]
76
- }}
77
- ],
78
- "key_message": ""
79
- }}
80
- ]
81
- }}
82
-
83
-
84
- ### Output:
85
- ```json

langchain_templates/template_combined.txt CHANGED
@@ -1,9 +1,5 @@
1
- You are a helpful, intelligent chatbot. Create the slides for a presentation on the given topic.
2
- Include main headings for each slide, detailed bullet points for each slide.
3
- Add relevant content to each slide.
4
- The content should be descriptive, verbose, and detailed as much as possible.
5
  If relevant, add one or two examples to illustrate the concept.
6
- Unless explicitly specified with the topic, create about 10 slides.
7
 
8
 
9
  Topic:
 
1
+ You are a helpful, intelligent chatbot. Create the slides for a presentation on the given topic. Include main headings for each slide, detailed bullet points for each slide. Add relevant content to each slide.
 
 
 
2
  If relevant, add one or two examples to illustrate the concept.
 
3
 
4
 
5
  Topic:
legacy_app.py DELETED
@@ -1,294 +0,0 @@
1
- import pathlib
2
- import logging
3
- import tempfile
4
- from typing import List, Tuple
5
-
6
- import json5
7
- import metaphor_python as metaphor
8
- import streamlit as st
9
-
10
- from helpers import llm_helper, pptx_helper
11
- from global_config import GlobalConfig
12
-
13
-
14
- APP_TEXT = json5.loads(open(GlobalConfig.APP_STRINGS_FILE, 'r', encoding='utf-8').read())
15
- GB_CONVERTER = 2 ** 30
16
-
17
-
18
- logger = logging.getLogger(__name__)
19
-
20
-
21
- @st.cache_data
22
- def get_contents_wrapper(text: str) -> str:
23
- """
24
- Fetch and cache the slide deck contents on a topic by calling an external API.
25
-
26
- :param text: The presentation topic.
27
- :return: The slide deck contents or outline in JSON format.
28
- """
29
-
30
- logger.info('LLM call because of cache miss...')
31
- return llm_helper.generate_slides_content(text).strip()
32
-
33
-
34
- @st.cache_resource
35
- def get_metaphor_client_wrapper() -> metaphor.Metaphor:
36
- """
37
- Create a Metaphor client for semantic Web search.
38
-
39
- :return: Metaphor instance.
40
- """
41
-
42
- return metaphor.Metaphor(api_key=GlobalConfig.METAPHOR_API_KEY)
43
-
44
-
45
- @st.cache_data
46
- def get_web_search_results_wrapper(text: str) -> List[Tuple[str, str]]:
47
- """
48
- Fetch and cache the Web search results on a given topic.
49
-
50
- :param text: The topic.
51
- :return: A list of (title, link) tuples.
52
- """
53
-
54
- results = []
55
- search_results = get_metaphor_client_wrapper().search(
56
- text,
57
- use_autoprompt=True,
58
- num_results=5
59
- )
60
-
61
- for a_result in search_results.results:
62
- results.append((a_result.title, a_result.url))
63
-
64
- return results
65
-
66
-
67
- def build_ui():
68
- """
69
- Display the input elements for content generation. Only covers the first step.
70
- """
71
-
72
- # get_disk_used_percentage()
73
-
74
- st.title(APP_TEXT['app_name'])
75
- st.subheader(APP_TEXT['caption'])
76
- st.markdown(
77
- 'Powered by'
78
- ' [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).'
79
- )
80
- st.markdown(
81
- '*If the JSON is generated or parsed incorrectly, try again later by making minor changes'
82
- ' to the input text.*'
83
- )
84
-
85
- with st.form('my_form'):
86
- # Topic input
87
- try:
88
- with open(GlobalConfig.PRELOAD_DATA_FILE, 'r', encoding='utf-8') as in_file:
89
- preload_data = json5.loads(in_file.read())
90
- except (FileExistsError, FileNotFoundError):
91
- preload_data = {'topic': '', 'audience': ''}
92
-
93
- topic = st.text_area(
94
- APP_TEXT['input_labels'][0],
95
- value=preload_data['topic']
96
- )
97
-
98
- texts = list(GlobalConfig.PPTX_TEMPLATE_FILES.keys())
99
- captions = [GlobalConfig.PPTX_TEMPLATE_FILES[x]['caption'] for x in texts]
100
-
101
- pptx_template = st.radio(
102
- 'Select a presentation template:',
103
- texts,
104
- captions=captions,
105
- horizontal=True
106
- )
107
-
108
- st.divider()
109
- submit = st.form_submit_button('Generate slide deck')
110
-
111
- if submit:
112
- # st.write(f'Clicked {time.time()}')
113
- st.session_state.submitted = True
114
-
115
- # https://github.com/streamlit/streamlit/issues/3832#issuecomment-1138994421
116
- if 'submitted' in st.session_state:
117
- progress_text = 'Generating the slides...give it a moment'
118
- progress_bar = st.progress(0, text=progress_text)
119
-
120
- topic_txt = topic.strip()
121
- generate_presentation(topic_txt, pptx_template, progress_bar)
122
-
123
- st.divider()
124
- st.text(APP_TEXT['tos'])
125
- st.text(APP_TEXT['tos2'])
126
-
127
- st.markdown(
128
- '![Visitors]'
129
- '(https://api.visitorbadge.io/api/visitors?path=https%3A%2F%2Fhuggingface.co%2Fspaces%2Fbarunsaha%2Fslide-deck-ai&countColor=%23263759)'
130
- )
131
-
132
-
133
- def generate_presentation(topic: str, pptx_template: str, progress_bar):
134
- """
135
- Process the inputs to generate the slides.
136
-
137
- :param topic: The presentation topic based on which contents are to be generated.
138
- :param pptx_template: The PowerPoint template name to be used.
139
- :param progress_bar: Progress bar from the page.
140
- """
141
-
142
- topic_length = len(topic)
143
- logger.debug('Input length:: topic: %s', topic_length)
144
-
145
- if topic_length >= 10:
146
- logger.debug('Topic: %s', topic)
147
- target_length = min(topic_length, GlobalConfig.LLM_MODEL_MAX_INPUT_LENGTH)
148
-
149
- try:
150
- # Step 1: Generate the contents in JSON format using an LLM
151
- json_str = process_slides_contents(topic[:target_length], progress_bar)
152
- logger.debug('Truncated topic: %s', topic[:target_length])
153
- logger.debug('Length of JSON: %d', len(json_str))
154
-
155
- # Step 2: Generate the slide deck based on the template specified
156
- if len(json_str) > 0:
157
- st.info(
158
- 'Tip: The generated content doesn\'t look so great?'
159
- ' Need alternatives? Just change your description text and try again.',
160
- icon="💡️"
161
- )
162
- else:
163
- st.error(
164
- 'Unfortunately, JSON generation failed, so the next steps would lead'
165
- ' to nowhere. Try again or come back later.'
166
- )
167
- return
168
-
169
- all_headers = generate_slide_deck(json_str, pptx_template, progress_bar)
170
-
171
- # Step 3: Bonus stuff: Web references and AI art
172
- show_bonus_stuff(all_headers)
173
-
174
- except ValueError as ve:
175
- st.error(f'Unfortunately, an error occurred: {ve}! '
176
- f'Please change the text, try again later, or report it, sharing your inputs.')
177
-
178
- else:
179
- st.error('Not enough information provided! Please be a little more descriptive :)')
180
-
181
-
182
- def process_slides_contents(text: str, progress_bar: st.progress) -> str:
183
- """
184
- Convert given text into structured data and display. Update the UI.
185
-
186
- :param text: The topic description for the presentation.
187
- :param progress_bar: Progress bar for this step.
188
- :return: The contents as a JSON-formatted string.
189
- """
190
-
191
- json_str = ''
192
-
193
- try:
194
- logger.info('Calling LLM for content generation on the topic: %s', text)
195
- json_str = get_contents_wrapper(text)
196
- except Exception as ex:
197
- st.error(
198
- f'An exception occurred while trying to convert to JSON. It could be because of heavy'
199
- f' traffic or something else. Try doing it again or try again later.'
200
- f'\nError message: {ex}'
201
- )
202
-
203
- progress_bar.progress(50, text='Contents generated')
204
-
205
- with st.expander('The generated contents (in JSON format)'):
206
- st.code(json_str, language='json')
207
-
208
- return json_str
209
-
210
-
211
- def generate_slide_deck(json_str: str, pptx_template: str, progress_bar) -> List:
212
- """
213
- Create a slide deck.
214
-
215
- :param json_str: The contents in JSON format.
216
- :param pptx_template: The PPTX template name.
217
- :param progress_bar: Progress bar.
218
- :return: A list of all slide headers and the title.
219
- """
220
-
221
- progress_text = 'Creating the slide deck...give it a moment'
222
- progress_bar.progress(75, text=progress_text)
223
-
224
- # # Get a unique name for the file to save -- use the session ID
225
- # ctx = st_sr.get_script_run_ctx()
226
- # session_id = ctx.session_id
227
- # timestamp = time.time()
228
- # output_file_name = f'{session_id}_{timestamp}.pptx'
229
-
230
- temp = tempfile.NamedTemporaryFile(delete=False, suffix='.pptx')
231
- path = pathlib.Path(temp.name)
232
-
233
- logger.info('Creating PPTX file...')
234
- all_headers = pptx_helper.generate_powerpoint_presentation(
235
- json_str,
236
- slides_template=pptx_template,
237
- output_file_path=path
238
- )
239
- progress_bar.progress(100, text='Done!')
240
-
241
- with open(path, 'rb') as f:
242
- st.download_button('Download PPTX file', f, file_name='Presentation.pptx')
243
-
244
- if temp:
245
- temp.close()
246
-
247
- return all_headers
248
-
249
-
250
- def show_bonus_stuff(ppt_headers: List[str]):
251
- """
252
- Show bonus stuff for the presentation.
253
-
254
- :param ppt_headers: A list of the slide headings.
255
- """
256
-
257
- # Use the presentation title and the slide headers to find relevant info online
258
- logger.info('Calling Metaphor search...')
259
- ppt_text = ' '.join(ppt_headers)
260
- search_results = get_web_search_results_wrapper(ppt_text)
261
- md_text_items = []
262
-
263
- for (title, link) in search_results:
264
- md_text_items.append(f'[{title}]({link})')
265
-
266
- with st.expander('Related Web references'):
267
- st.markdown('\n\n'.join(md_text_items))
268
-
269
- logger.info('Done!')
270
-
271
- # # Avoid image generation. It costs time and an API call, so just limit to the text generation.
272
- # with st.expander('AI-generated image on the presentation topic'):
273
- # logger.info('Calling SDXL for image generation...')
274
- # # img_empty.write('')
275
- # # img_text.write(APP_TEXT['image_info'])
276
- # image = get_ai_image_wrapper(ppt_text)
277
- #
278
- # if len(image) > 0:
279
- # image = base64.b64decode(image)
280
- # st.image(image, caption=ppt_text)
281
- # st.info('Tip: Right-click on the image to save it.', icon="💡️")
282
- # logger.info('Image added')
283
-
284
-
285
- def main():
286
- """
287
- Trigger application run.
288
- """
289
-
290
- build_ui()
291
-
292
-
293
- if __name__ == '__main__':
294
- main()

helpers/llm_helper.py → llm_helper.py RENAMED
@@ -1,10 +1,5 @@
1
  import logging
2
  import requests
3
- from requests.adapters import HTTPAdapter
4
- from urllib3.util import Retry
5
-
6
- from langchain_community.llms.huggingface_endpoint import HuggingFaceEndpoint
7
- from langchain_core.language_models import LLM
8
 
9
  from global_config import GlobalConfig
10
 
@@ -12,58 +7,28 @@ from global_config import GlobalConfig
12
  HF_API_URL = f"https://api-inference.huggingface.co/models/{GlobalConfig.HF_LLM_MODEL_NAME}"
13
  HF_API_HEADERS = {"Authorization": f"Bearer {GlobalConfig.HUGGINGFACEHUB_API_TOKEN}"}
14
 
15
- logger = logging.getLogger(__name__)
16
-
17
- retries = Retry(
18
- total=5,
19
- backoff_factor=0.25,
20
- backoff_jitter=0.3,
21
- status_forcelist=[502, 503, 504],
22
- allowed_methods={'POST'},
23
  )
24
- adapter = HTTPAdapter(max_retries=retries)
25
- http_session = requests.Session()
26
- http_session.mount('https://', adapter)
27
- http_session.mount('http://', adapter)
28
-
29
-
30
- def get_hf_endpoint() -> LLM:
31
- """
32
- Get an LLM via the HuggingFaceEndpoint of LangChain.
33
-
34
- :return: The LLM.
35
- """
36
-
37
- logger.debug('Getting LLM via HF endpoint')
38
 
39
- return HuggingFaceEndpoint(
40
- repo_id=GlobalConfig.HF_LLM_MODEL_NAME,
41
- max_new_tokens=GlobalConfig.LLM_MODEL_MAX_OUTPUT_LENGTH,
42
- top_k=40,
43
- top_p=0.95,
44
- temperature=GlobalConfig.LLM_MODEL_TEMPERATURE,
45
- repetition_penalty=1.03,
46
- streaming=True,
47
- huggingfacehub_api_token=GlobalConfig.HUGGINGFACEHUB_API_TOKEN,
48
- return_full_text=False,
49
- stop_sequences=['</s>'],
50
- )
51
 
52
 
53
- def hf_api_query(payload: dict) -> dict:
54
  """
55
  Invoke HF inference end-point API.
56
 
57
- :param payload: The prompt for the LLM and related parameters.
58
- :return: The output from the LLM.
59
  """
60
 
61
  try:
62
- response = http_session.post(HF_API_URL, headers=HF_API_HEADERS, json=payload, timeout=15)
63
  result = response.json()
64
  except requests.exceptions.Timeout as te:
65
- logger.error('*** Error: hf_api_query timeout! %s', str(te))
66
- result = []
67
 
68
  return result
69
 
@@ -72,8 +37,8 @@ def generate_slides_content(topic: str) -> str:
72
  """
73
  Generate the outline/contents of slides for a presentation on a given topic.
74
 
75
- :param topic: Topic on which slides are to be generated.
76
- :return: The content in JSON format.
77
  """
78
 
79
  with open(GlobalConfig.SLIDES_TEMPLATE_FILE, 'r', encoding='utf-8') as in_file:
@@ -81,8 +46,8 @@ def generate_slides_content(topic: str) -> str:
81
  template_txt = template_txt.replace('<REPLACE_PLACEHOLDER>', topic)
82
 
83
  output = hf_api_query({
84
- 'inputs': template_txt,
85
- 'parameters': {
86
  'temperature': GlobalConfig.LLM_MODEL_TEMPERATURE,
87
  'min_length': GlobalConfig.LLM_MODEL_MIN_OUTPUT_LENGTH,
88
  'max_length': GlobalConfig.LLM_MODEL_MAX_OUTPUT_LENGTH,
@@ -91,7 +56,7 @@ def generate_slides_content(topic: str) -> str:
91
  'return_full_text': False,
92
  # "repetition_penalty": 0.0001
93
  },
94
- 'options': {
95
  'wait_for_model': True,
96
  'use_cache': True
97
  }
@@ -105,7 +70,7 @@ def generate_slides_content(topic: str) -> str:
105
  # logging.debug(f'{json_end_idx=}')
106
  output = output[:json_end_idx]
107
 
108
- logger.debug('generate_slides_content: output: %s', output)
109
 
110
  return output
111
 
 
1
  import logging
2
  import requests
 
 
 
 
 
3
 
4
  from global_config import GlobalConfig
5
 
 
7
  HF_API_URL = f"https://api-inference.huggingface.co/models/{GlobalConfig.HF_LLM_MODEL_NAME}"
8
  HF_API_HEADERS = {"Authorization": f"Bearer {GlobalConfig.HUGGINGFACEHUB_API_TOKEN}"}
9
 
10
+ logging.basicConfig(
11
+ level=GlobalConfig.LOG_LEVEL,
12
+ format='%(asctime)s - %(message)s',
 
 
 
 
 
13
  )
 
 
 
 
 
 
 
 
 
 
 
 
 
 
14
 
15
+ # llm = None
 
 
 
 
 
 
 
 
 
 
 
16
 
17
 
18
+ def hf_api_query(payload: dict):
19
  """
20
  Invoke HF inference end-point API.
21
 
22
+ :param payload: The prompt for the LLM and related parameters
23
+ :return: The output from the LLM
24
  """
25
 
26
  try:
27
+ response = requests.post(HF_API_URL, headers=HF_API_HEADERS, json=payload, timeout=15)
28
  result = response.json()
29
  except requests.exceptions.Timeout as te:
30
+ logging.error('*** Error: hf_api_query timeout! %s', str(te))
31
+ result = {}
32
 
33
  return result
34
 
 
37
  """
38
  Generate the outline/contents of slides for a presentation on a given topic.
39
 
40
+ :param topic: Topic on which slides are to be generated
41
+ :return: The content in JSON format
42
  """
43
 
44
  with open(GlobalConfig.SLIDES_TEMPLATE_FILE, 'r', encoding='utf-8') as in_file:
 
46
  template_txt = template_txt.replace('<REPLACE_PLACEHOLDER>', topic)
47
 
48
  output = hf_api_query({
49
+ "inputs": template_txt,
50
+ "parameters": {
51
  'temperature': GlobalConfig.LLM_MODEL_TEMPERATURE,
52
  'min_length': GlobalConfig.LLM_MODEL_MIN_OUTPUT_LENGTH,
53
  'max_length': GlobalConfig.LLM_MODEL_MAX_OUTPUT_LENGTH,
 
56
  'return_full_text': False,
57
  # "repetition_penalty": 0.0001
58
  },
59
+ "options": {
60
  'wait_for_model': True,
61
  'use_cache': True
62
  }
 
70
  # logging.debug(f'{json_end_idx=}')
71
  output = output[:json_end_idx]
72
 
73
+ logging.debug('generate_slides_content: output: %s', output)
74
 
75
  return output
76
 
pptx_helper.py ADDED
@@ -0,0 +1,254 @@
1
+ import logging
2
+ import pathlib
3
+ import re
4
+ import tempfile
5
+ from typing import List, Tuple
6
+
7
+ import json5
8
+ import pptx
9
+ import yaml
10
+
11
+ from global_config import GlobalConfig
12
+
13
+
14
+ PATTERN = re.compile(r"^slide[ ]+\d+:", re.IGNORECASE)
15
+ SAMPLE_JSON_FOR_PPTX = '''
16
+ {
17
+ "title": "Understanding AI",
18
+ "slides": [
19
+ {
20
+ "heading": "Introduction",
21
+ "bullet_points": [
22
+ "Brief overview of AI",
23
+ [
24
+ "Importance of understanding AI"
25
+ ]
26
+ ]
27
+ }
28
+ ]
29
+ }
30
+ '''
31
+
32
+ logging.basicConfig(
33
+ level=GlobalConfig.LOG_LEVEL,
34
+ format='%(asctime)s - %(message)s',
35
+ )
36
+
37
+
38
+ def remove_slide_number_from_heading(header: str) -> str:
39
+ """
40
+ Remove the slide number from a given slide header.
41
+
42
+ :param header: The header of a slide
43
+ """
44
+
45
+ if PATTERN.match(header):
46
+ idx = header.find(':')
47
+ header = header[idx + 1:]
48
+
49
+ return header
50
+
51
+
52
+ def generate_powerpoint_presentation(
53
+ structured_data: str,
54
+ as_yaml: bool,
55
+ slides_template: str,
56
+ output_file_path: pathlib.Path
57
+ ) -> List:
58
+ """
59
+ Create and save a PowerPoint presentation file containing the contents in JSON or YAML format.
60
+
61
+ :param structured_data: The presentation contents as "JSON" (may contain trailing commas) or
62
+ YAML
63
+ :param as_yaml: True if the input data is in YAML format; False if it is in JSON format
64
+ :param slides_template: The PPTX template to use
65
+ :param output_file_path: The path of the PPTX file to save as
66
+ :return A list of presentation title and slides headers
67
+ """
68
+
69
+ if as_yaml:
70
+ # Avoid YAML mode: nested bullets can lead to incorrect YAML generation
71
+ try:
72
+ parsed_data = yaml.safe_load(structured_data)
73
+ except yaml.parser.ParserError as ype:
74
+ logging.error('*** YAML parse error: %s', str(ype))
75
+ parsed_data = {'title': '', 'slides': []}
76
+ else:
77
+ # The structured "JSON" might contain trailing commas, so using json5
78
+ parsed_data = json5.loads(structured_data)
79
+
80
+ logging.debug(
81
+ "*** Using PPTX template: %s",
82
+ GlobalConfig.PPTX_TEMPLATE_FILES[slides_template]['file']
83
+ )
84
+ presentation = pptx.Presentation(GlobalConfig.PPTX_TEMPLATE_FILES[slides_template]['file'])
85
+
86
+ # The title slide
87
+ title_slide_layout = presentation.slide_layouts[0]
88
+ slide = presentation.slides.add_slide(title_slide_layout)
89
+ title = slide.shapes.title
90
+ subtitle = slide.placeholders[1]
91
+ title.text = parsed_data['title']
92
+ logging.debug('Presentation title is: %s', title.text)
93
+ subtitle.text = 'by Myself and SlideDeck AI :)'
94
+ all_headers = [title.text, ]
95
+
96
+ # background = slide.background
97
+ # background.fill.solid()
98
+ # background.fill.fore_color.rgb = RGBColor.from_string('C0C0C0') # Silver
99
+ # title.text_frame.paragraphs[0].font.color.rgb = RGBColor(0, 0, 128) # Navy blue
100
+
101
+ # Add contents in a loop
102
+ for a_slide in parsed_data['slides']:
103
+ bullet_slide_layout = presentation.slide_layouts[1]
104
+ slide = presentation.slides.add_slide(bullet_slide_layout)
105
+ shapes = slide.shapes
106
+
107
+ title_shape = shapes.title
108
+ body_shape = shapes.placeholders[1]
109
+ title_shape.text = remove_slide_number_from_heading(a_slide['heading'])
110
+ all_headers.append(title_shape.text)
111
+ text_frame = body_shape.text_frame
112
+
113
+ # The bullet_points may contain a nested hierarchy of JSON arrays
114
+ # In some scenarios, it may contain objects (dictionaries) because the LLM generated so
115
+ # ^ The second scenario is not covered
116
+
117
+ flat_items_list = get_flat_list_of_contents(a_slide['bullet_points'], level=0)
118
+
119
+ for an_item in flat_items_list:
120
+ paragraph = text_frame.add_paragraph()
121
+ paragraph.text = an_item[0]
122
+ paragraph.level = an_item[1]
123
+
124
+ # The thank-you slide
125
+ last_slide_layout = presentation.slide_layouts[0]
126
+ slide = presentation.slides.add_slide(last_slide_layout)
127
+ title = slide.shapes.title
128
+ title.text = 'Thank you!'
129
+
130
+ presentation.save(output_file_path)
131
+
132
+ return all_headers
133
+
134
+
135
+ def get_flat_list_of_contents(items: list, level: int) -> List[Tuple]:
136
+ """
137
+ Flatten a (hierarchical) list of bullet points to a single list containing each item and its level.
138
+
139
+ :param items: A bullet point (string or list)
140
+ :param level: The current level of hierarchy
141
+ :return: A list of (bullet item text, hierarchical level) tuples
142
+ """
143
+
144
+ flat_list = []
145
+
146
+ for item in items:
147
+ if isinstance(item, str):
148
+ flat_list.append((item, level))
149
+ elif isinstance(item, list):
150
+ flat_list = flat_list + get_flat_list_of_contents(item, level + 1)
151
+
152
+ return flat_list
153
+
154
+
155
+ if __name__ == '__main__':
156
+ # bullets = [
157
+ # 'Description',
158
+ # 'Types',
159
+ # [
160
+ # 'Type A',
161
+ # 'Type B'
162
+ # ],
163
+ # 'Grand parent',
164
+ # [
165
+ # 'Parent',
166
+ # [
167
+ # 'Grand child'
168
+ # ]
169
+ # ]
170
+ # ]
171
+
172
+ # output = get_flat_list_of_contents(bullets, level=0)
173
+ # for x in output:
174
+ # print(x)
175
+
176
+ json_data = '''
177
+ {
178
+ "title": "Understanding AI",
179
+ "slides": [
180
+ {
181
+ "heading": "Introduction",
182
+ "bullet_points": [
183
+ "Brief overview of AI",
184
+ [
185
+ "Importance of understanding AI"
186
+ ]
187
+ ]
188
+ },
189
+ {
190
+ "heading": "What is AI?",
191
+ "bullet_points": [
192
+ "Definition of AI",
193
+ [
194
+ "Types of AI",
195
+ [
196
+ "Narrow or weak AI",
197
+ "General or strong AI"
198
+ ]
199
+ ],
200
+ "Differences between AI and machine learning"
201
+ ]
202
+ },
203
+ {
204
+ "heading": "How AI Works",
205
+ "bullet_points": [
206
+ "Overview of AI algorithms",
207
+ [
208
+ "Types of AI algorithms",
209
+ [
210
+ "Rule-based systems",
211
+ "Decision tree systems",
212
+ "Neural networks"
213
+ ]
214
+ ],
215
+ "How AI processes data"
216
+ ]
217
+ },
218
+ {
219
+ "heading": "Pros of AI",
220
+ "bullet_points": [
221
+ "Increased efficiency and productivity",
222
+ "Improved accuracy and precision",
223
+ "Enhanced decision-making capabilities",
224
+ "Personalized experiences"
225
+ ]
226
+ },
227
+ {
228
+ "heading": "Cons of AI",
229
+ "bullet_points": [
230
+ "Job displacement and loss of employment",
231
+ "Bias and discrimination",
232
+ "Privacy and security concerns",
233
+ "Dependence on technology"
234
+ ]
235
+ },
236
+ {
237
+ "heading": "Future Prospects of AI",
238
+ "bullet_points": [
239
+ "Advancements in fields such as healthcare and finance",
240
+ "Increased use"
241
+ ]
242
+ }
243
+ ]
244
+ }'''
245
+
246
+ temp = tempfile.NamedTemporaryFile(delete=False, suffix='.pptx')
247
+ path = pathlib.Path(temp.name)
248
+
249
+ generate_powerpoint_presentation(
250
+ json5.loads(json_data),
251
+ as_yaml=False,
252
+ output_file_path=path,
253
+ slides_template='Blank'
254
+ )
pptx_templates/Blank.pptx CHANGED
Binary files a/pptx_templates/Blank.pptx and b/pptx_templates/Blank.pptx differ
 
pptx_templates/Ion_Boardroom.pptx CHANGED
Binary files a/pptx_templates/Ion_Boardroom.pptx and b/pptx_templates/Ion_Boardroom.pptx differ
 
pptx_templates/Urban_monochrome.pptx CHANGED
Binary files a/pptx_templates/Urban_monochrome.pptx and b/pptx_templates/Urban_monochrome.pptx differ
 
requirements.txt CHANGED
@@ -1,22 +1,12 @@
1
- aiohttp==3.9.5
2
  python-dotenv[cli]~=1.0.0
3
- gitpython==3.1.43
4
- json_repair==0.15.3
5
- idna==3.7
6
- jinja2==3.1.3
7
- Pillow==10.3.0
8
- pyarrow~=16.0.0
9
- pydantic==2.4.0
10
- langchain~=0.1.16
11
- langchain-core~=0.1.46
12
  streamlit~=1.32.2
 
13
 
14
  python-pptx
15
  metaphor-python
16
  json5~=0.9.14
17
- requests~=2.31.0
18
-
19
- transformers~=4.39.2
20
- langchain-community
21
-
22
- urllib3~=2.2.1
 
 
1
  python-dotenv[cli]~=1.0.0
2
+ langchain~=0.1.13
3
+ # huggingface_hub
 
 
 
 
 
 
 
4
  streamlit~=1.32.2
5
+ clarifai==9.7.4
6
 
7
  python-pptx
8
  metaphor-python
9
  json5~=0.9.14
10
+ PyYAML~=6.0.1
11
+ # curlify
12
+ requests~=2.31.0
 
 
 
slides_for_this_project_by_this_project/515fc765-4aaf-4485-a421-551363710c03_1693157001.5142696.pptx CHANGED
Binary files a/slides_for_this_project_by_this_project/515fc765-4aaf-4485-a421-551363710c03_1693157001.5142696.pptx and b/slides_for_this_project_by_this_project/515fc765-4aaf-4485-a421-551363710c03_1693157001.5142696.pptx differ
 
strings.json CHANGED
@@ -1,6 +1,6 @@
1
  {
2
- "app_name": ":green[SlideDeck AI $^{[Reloaded]}$]",
3
- "caption": "*Converse, create, and improve your next PowerPoint slide deck*",
4
  "section_headers": [
5
  "Step 1: Generate your content",
6
  "Step 2: Make it structured",
@@ -23,15 +23,7 @@
23
  ],
24
  "urls_info": "Here is a list of some online resources that you can consult for further information on this topic:",
25
  "image_info": "Got some more minutes? We are also trying to deliver an AI-generated art on the presentation topic, fresh off the studio, just for you!",
26
- "content_generation_error": "Unfortunately, SlideDeck AI failed to generate any content for you! Please try again later.",
27
- "json_parsing_error": "Unfortunately, SlideDeck AI failed to parse the response from LLM! Please try again by rephrasing the query or refreshing the page.",
28
  "tos": "SlideDeck AI is an experimental prototype, and it has its limitations.\nPlease carefully review any and all AI-generated content.",
29
- "tos2": "By using SlideDeck AI, you agree to fair and responsible usage.\nNo liability assumed by any party.",
30
- "ai_greetings": [
31
- "How may I help you today?",
32
- "Stuck with creating your presentation? Let me help you.",
33
- "Looks like you have a looming deadline. Can I help you get started with your slide deck?",
34
- "Hello! What topic do you have on your mind today?"
35
- ],
36
- "chat_placeholder": "Write the topic or instructions here"
37
  }
 
1
  {
2
+ "app_name": "SlideDeck AI",
3
+ "caption": "*:green[Co-create your next PowerPoint slide deck with AI]*",
4
  "section_headers": [
5
  "Step 1: Generate your content",
6
  "Step 2: Make it structured",
 
23
  ],
24
  "urls_info": "Here is a list of some online resources that you can consult for further information on this topic:",
25
  "image_info": "Got some more minutes? We are also trying to deliver an AI-generated art on the presentation topic, fresh off the studio, just for you!",
26
+ "content_generation_failure_error": "Unfortunately, SlideDeck AI failed to generate any content for you! Please try again later.",
 
27
  "tos": "SlideDeck AI is an experimental prototype, and it has its limitations.\nPlease carefully review any and all AI-generated content.",
28
+ "tos2": "By using SlideDeck AI, you agree to fair and responsible usage.\nNo liability assumed by any party."
 
 
 
 
 
 
 
29
  }