---
title: ChatPaper
emoji: π
colorFrom: pink
colorTo: purple
sdk: docker
sdk_version: 20.10.23
app_file: frontend.py
pinned: false
license: gpl-3.0
---
# ChatPaper

Yet another paper-reading assistant, similar to ChatPDF.
## Setup

- Install dependencies (tested on Python 3.9):

  ```bash
  pip install -r requirements.txt
  ```

- Start the local GROBID server:

  ```bash
  bash serve_grobid.sh
  ```

- Start the backend:

  ```bash
  python backend.py --port 5000 --host localhost
  ```

- Start the frontend:

  ```bash
  streamlit run frontend.py --server.port 8502 --server.address localhost
  ```
## Demo Example

- Prepare an OpenAI API key, then upload a PDF to start chatting with the paper.
## Implementation Details

**Greedy Dynamic Context**: Because of the model's max-token limit, we select only the most relevant paragraphs of the PDF for each user query. We split the chatbot's input and output into four parts: system_prompt (S), dynamic_source (D), user_query (Q), and model_answer (A). On each query, we first rank all paragraphs by using a sentence-embedding model to compute the similarity between the query embedding and every source-paragraph embedding. We then compose the dynamic_source greedily, appending the most relevant paragraphs while maintaining D <= MAX_TOKEN_LIMIT - Q - S - A - SOME_OVERHEAD.
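The greedy selection above can be sketched as follows. The token budget, the whitespace token counter, and the cosine helper are illustrative stand-ins: the real system would use the model's tokenizer and a sentence-embedding model (e.g. sentence-transformers) to produce the embeddings.

```python
import math

# Hypothetical budget values for illustration; the real limits
# depend on the model being used.
MAX_TOKEN_LIMIT = 4096
SOME_OVERHEAD = 64

def count_tokens(text):
    # Crude whitespace stand-in; a real implementation would use
    # the model's own tokenizer.
    return len(text.split())

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def build_dynamic_source(paragraphs, para_embs, query_emb,
                         s_tokens, q_tokens, a_tokens):
    """Greedily pick the paragraphs most similar to the query that fit D's budget."""
    budget = MAX_TOKEN_LIMIT - s_tokens - q_tokens - a_tokens - SOME_OVERHEAD
    # Rank all paragraphs by similarity to the query embedding.
    ranked = sorted(zip(paragraphs, para_embs),
                    key=lambda pe: cosine(query_emb, pe[1]),
                    reverse=True)
    picked, used = [], 0
    for para, _ in ranked:
        t = count_tokens(para)
        if used + t > budget:
            break  # greedy: stop once the next paragraph would overflow
        picked.append(para)
        used += t
    return picked
```

With real embeddings, the ranking step is the only part that changes; the budget arithmetic stays the same.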
**Context Truncating**: When the conversation context grows too long, we simply pop out the earliest QA pair.
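A minimal sketch of that truncation policy, assuming the history is kept as an oldest-first list of (question, answer) pairs and using a hypothetical token counter:

```python
def truncate_history(qa_pairs, max_tokens, count_tokens=lambda s: len(s.split())):
    """Drop the oldest QA pairs until the conversation fits the token budget.

    qa_pairs: list of (question, answer) strings, oldest first.
    count_tokens: illustrative whitespace counter; swap in the model tokenizer.
    """
    def total(pairs):
        return sum(count_tokens(q) + count_tokens(a) for q, a in pairs)

    history = list(qa_pairs)
    while history and total(history) > max_tokens:
        history.pop(0)  # pop the earliest QA pair first
    return history
```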
## TODO

- Context condensing: how should we deal with long context? Perhaps a soft prompt could be tuned to condense it.
- Popping context out based on similarity, rather than recency.
## References

- SciPDF Parser: https://github.com/titipata/scipdf_parser
- st-chat: https://github.com/AI-Yash/st-chat
- sentence-transformers: https://github.com/UKPLab/sentence-transformers
- ChatGPT Chatbot Wrapper: https://github.com/acheong08/ChatGPT