tykiww committed (verified)
Commit c66f8c0 · Parent: 82e0a0e

Update layout/about.md

Files changed (1): layout/about.md (+5 −3)
@@ -5,11 +5,13 @@
 
 ## What?
 
- This Gradio app is a demo showcasing a meeting Q&A application that retrieves multiple vtt transcripts, uploads them into pinecone as storage, and answers questions using the [Llama3.1 model](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). Unfortunately, The lack of a persistent GPU on Hugginface Zero spaces posed some challenges in using a [fine tuned model](https://huggingface.co/tykiww/llama3-8b-meetingQA) based on instruction tuned alpaca datasets and a noisy synthetic dataset of over 3000+ product, technical, and academic meetings. However, the outputs should still prove a massive improvement over the base Llama3 family of models.
+ This app is a demo showcasing a meeting Q&A application that retrieves multiple VTT transcripts, uploads them into Pinecone for storage, and answers questions using the [Llama 3.1 model](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
+
+ Unfortunately, the lack of a persistent GPU on Hugging Face Zero spaces posed some challenges in using a [fine-tuned model](https://huggingface.co/tykiww/llama3-8b-meetingQA) based on instruction-tuned Alpaca datasets and a noisy synthetic dataset of over 3,000 product, technical, and academic meetings. Even so, the outputs should still be a marked improvement over the base Llama 3 family of models.
 
 ## Why?
 
- The goal of the demo is to show you how RAG, prompt-engineering, and fine-tuning can all come together to enhance specific use-cases like meeting querying. This Q&A service seeks to look beyond "summarization" and "next steps" to create a customizable parser that can extract user-defined questions for enhanced specificity.
+ **The value** of a tool like this is its ability to retrieve only the most relevant context and analysis from *multiple* documents. This means you can easily scale information and question retrieval around almost any problem structure (think stringing together 50+ documents, then chaining the LLM across multiple structured questions to produce a tailored report).
 
 ## How?
 
@@ -19,7 +21,7 @@ Just start by following the guide below:
 2) ⤮ Wait for your file to be stored in the vector database.
 3) ❓ Query the meeting!
 
- Or, just skip right to step 3 since there are already some meetings in the database to query from!
+ Or, just skip ⏭️ right to step 3, since there are already some meetings in the database to query!
 
 
 This demo is just a peek and is subject to a demand queue. More to come!
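The pipeline the "What?" section describes — retrieve VTT transcripts, store them in a vector database, answer with Llama 3.1 — begins by turning a .vtt file into embeddable text. Below is a minimal sketch of that first step, under the assumption that captions are chunked before embedding; all function names here are illustrative, not the app's actual code.

```python
# Sketch: parse a WebVTT transcript into plain-text chunks suitable
# for embedding and upserting into a vector store. Illustrative only.

def parse_vtt(vtt_text: str) -> list[str]:
    """Extract caption lines from a WebVTT transcript, skipping the
    WEBVTT header, numeric cue identifiers, and timestamp lines."""
    lines = []
    for line in vtt_text.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT" or "-->" in line or line.isdigit():
            continue
        lines.append(line)
    return lines

def chunk_text(lines: list[str], max_chars: int = 500) -> list[str]:
    """Greedily pack caption lines into chunks of at most max_chars,
    so each chunk fits comfortably in one embedding request."""
    chunks, current = [], ""
    for line in lines:
        if current and len(current) + len(line) + 1 > max_chars:
            chunks.append(current)
            current = line
        else:
            current = f"{current} {line}".strip()
    if current:
        chunks.append(current)
    return chunks

sample = """WEBVTT

1
00:00:00.000 --> 00:00:04.000
Welcome to the weekly product sync.

2
00:00:04.000 --> 00:00:09.000
First agenda item is the Q3 roadmap."""

chunks = chunk_text(parse_vtt(sample), max_chars=80)
```

In the app's flow, each chunk would then be embedded and upserted to the vector index before querying; that part is omitted here since it requires live credentials.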