bhaskartripathi committed c30c7ec
1 Parent(s): e6a7a6e
Update app.py
app.py CHANGED
@@ -162,9 +162,7 @@ def question_answer(url, file, question,openAI_key):
 recommender = SemanticSearch()
 
 title = 'PDF GPT'
-description = """
-1. The problem is that Open AI has a 4K token limit and cannot take an entire PDF file as input. Additionally, it sometimes returns irrelevant responses due to poor embeddings. ChatGPT cannot directly talk to external data. The solution is PDF GPT, which allows you to chat with an uploaded PDF file using GPT functionalities. The application breaks the document into smaller chunks and generates embeddings using a powerful Deep Averaging Network Encoder. A semantic search is performed on your query, and the top relevant chunks are used to generate a response.
-2. The returned response can even cite the page number in square brackets([]) where the information is located, adding credibility to the responses and helping to locate pertinent information quickly. The Responses are much better than the naive responses by Open AI."""
+description = """ PDF GPT allows you to chat with your PDF file using Universal Sentence Encoder and Open AI. It gives hallucination free response than other tools as the embeddings are better than OpenAI. The returned response can even cite the page number in square brackets([]) where the information is located, adding credibility to the responses and helping to locate pertinent information quickly."""
 
 with gr.Blocks() as demo:
 
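The description being replaced above summarizes the retrieval flow: the PDF is split into smaller chunks, each chunk is embedded with the Universal Sentence Encoder (a Deep Averaging Network), and a semantic search over those embeddings selects the top chunks to pass to OpenAI, keeping the prompt under the token limit. The sketch below illustrates that flow under stated assumptions; it is not the actual SemanticSearch class in app.py, and the names chunk_text and SimpleSemanticSearch, the chunk size, and the neighbor count are hypothetical choices for illustration.

# Minimal sketch of the retrieval flow described in the commit's description:
# chunk the text, embed the chunks with the Universal Sentence Encoder
# (a Deep Averaging Network), and return the chunks nearest to the query.
# Illustrative only -- not the SemanticSearch implementation from app.py.
import tensorflow_hub as hub
from sklearn.neighbors import NearestNeighbors

USE_URL = "https://tfhub.dev/google/universal-sentence-encoder/4"

def chunk_text(text: str, words_per_chunk: int = 150) -> list[str]:
    """Split a long document into fixed-size word chunks (size is arbitrary here)."""
    words = text.split()
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]

class SimpleSemanticSearch:
    def __init__(self, chunks: list[str], top_k: int = 5):
        self.embed = hub.load(USE_URL)              # Universal Sentence Encoder
        self.chunks = chunks
        self.embeddings = self.embed(chunks).numpy()
        self.nn = NearestNeighbors(n_neighbors=min(top_k, len(chunks)))
        self.nn.fit(self.embeddings)

    def __call__(self, query: str) -> list[str]:
        # Embed the query and return the most similar chunks.
        query_emb = self.embed([query]).numpy()
        _, idx = self.nn.kneighbors(query_emb)
        return [self.chunks[i] for i in idx[0]]

# Usage sketch: the returned chunks would be packed into the prompt sent to
# OpenAI, so the request stays within the model's token limit.
# chunks = chunk_text(pdf_text)
# recommender = SimpleSemanticSearch(chunks)
# top_chunks = recommender("What is the main contribution of the paper?")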