Multi Agent Systems

#3
by awacke1 - opened

🔎 How can Multi-Agent Systems be used in Streamlit Python and HTML5 with JavaScript to create a context prompt and document search and retrieval system?
Multi-Agent Systems (MAS) are a form of distributed artificial intelligence in which multiple autonomous agents interact with each other to achieve a common goal. In the context of Streamlit Python and HTML5 with JavaScript, MAS can be used to create a context prompt and document search and retrieval system.

To create such a system using MAS in Streamlit Python and HTML5 with JavaScript, you can follow these steps:

1. Define the agents: The first step is to define the agents that will be used in the system. Each agent should have a specific role and set of responsibilities. For example, one agent could be responsible for collecting and indexing documents, while another could be responsible for processing user queries and retrieving relevant documents.
2. Implement the agents: Once the agents have been defined, implement them in Python. Each agent should have a set of methods that allow it to perform its specific tasks. For example, the document collection and indexing agent could have methods for crawling websites, parsing HTML, and storing documents in a database.
3. Create the user interface: Build the interface using Streamlit and HTML5 with JavaScript. It should allow users to enter queries and view the results. Use Streamlit for the layout and formatting, and HTML5 with JavaScript to add interactivity and dynamic elements.
4. Integrate the agents with the user interface: Once the interface has been created, integrate the agents by calling their methods from the interface code. For example, when a user enters a query, the interface can call the query processing agent to retrieve relevant documents.
5. Test and refine the system: Finally, test and refine the system to ensure that it works correctly. This can involve adding more agents or refining the existing ones to improve performance.
Here is an example of how you could begin implementing the agents for such a system in Python:

Python code for the agents:

class DocumentCollectionAgent:
    def __init__(self):
        self.documents = []

    def crawl_website(self, url):
        # Fetch the raw page content (a minimal sketch; assumes the requests library)
        import requests
        return requests.get(url).text
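
Continuing the agent, a minimal sketch of the parsing and indexing methods mentioned above (assuming BeautifulSoup for HTML parsing; the method names are illustrative and match the controller code later in this thread):

    # (continuing DocumentCollectionAgent)
    def parse_document(self, html):
        # Strip markup and keep only the visible page text
        from bs4 import BeautifulSoup  # assumed dependency
        return BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

    def index_document(self, text):
        # Simplest possible "index": an in-memory list of document texts
        self.documents.append(text)

    def get_index(self):
        # Expose the stored documents so the query agent can search them
        return self.documents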

πŸ©ΊπŸ” Search Results

14 Aug 2001 | Using Methods of Declarative Logic Programming for Intelligent Information Agents | ⬇️

T. Eiter, M. Fink, G. Sabbatini and H. Tompits

The search for information on the web is faced with several problems, which
arise on the one hand from the vast number of available sources, and on the
other hand from their heterogeneity. A promising approach is the use of
multi-agent systems of information agents, which cooperatively solve advanced
information-retrieval problems. This requires capabilities to address complex
tasks, such as search and assessment of sources, query planning, information
merging and fusion, dealing with incomplete information, and handling of
inconsistency. In this paper, our interest is in the role which some methods
from the field of declarative logic programming can play in the realization of
reasoning capabilities for information agents. In particular, we are interested
in how they can be used and further developed for the specific needs of this
application domain. We review some existing systems and current projects, which
address information-integration problems. We then focus on declarative
knowledge-representation methods, and review and evaluate approaches from logic
programming and nonmonotonic reasoning for information agents. We discuss
advantages and drawbacks, and point out possible extensions and open issues.


25 Dec 2023 | Knowledge Graph Prompting for Multi-Document Question Answering | ⬇️

Yu Wang, Nedim Lipka, Ryan A. Rossi, Alexa Siu, Ruiyi Zhang, Tyler Derr

The "pre-train, prompt, predict" paradigm of large language models (LLMs) has
achieved remarkable success in open-domain question answering (OD-QA). However,
few works explore this paradigm in the scenario of multi-document question
answering (MD-QA), a task demanding a thorough understanding of the logical
associations among the contents and structures of different documents. To fill
this crucial gap, we propose a Knowledge Graph Prompting (KGP) method to
formulate the right context in prompting LLMs for MD-QA, which consists of a
graph construction module and a graph traversal module. For graph construction,
we create a knowledge graph (KG) over multiple documents with nodes symbolizing
passages or document structures (e.g., pages/tables), and edges denoting the
semantic/lexical similarity between passages or intra-document structural
relations. For graph traversal, we design an LLM-based graph traversal agent
that navigates across nodes and gathers supporting passages assisting LLMs in
MD-QA. The constructed graph serves as the global ruler that regulates the
transitional space among passages and reduces retrieval latency. Concurrently,
the graph traversal agent acts as a local navigator that gathers pertinent
context to progressively approach the question and guarantee retrieval quality.
Extensive experiments underscore the efficacy of KGP for MD-QA, signifying the
potential of leveraging graphs in enhancing the prompt design for LLMs. Our
code: https://github.com/YuWVandy/KG-LLM-MDQA.


29 Oct 2023 | Keyword Augmented Retrieval: Novel framework for Information Retrieval integrated with speech interface | ⬇️

Anupam Purwar and Rahul Sundar

Retrieving answers in a quick and low cost manner without hallucinations from
a combination of structured and unstructured data using Language models is a
major hurdle. This is what prevents employment of Language models in knowledge
retrieval automation. This becomes accentuated when one wants to integrate a
speech interface on top of a text based knowledge retrieval system. Besides,
for commercial search and chat-bot applications, complete reliance on
commercial large language models (LLMs) like GPT 3.5 etc. can be very costly.
In the present study, the authors have addressed the aforementioned problem by
first developing a keyword based search framework which augments discovery of
the context from the document to be provided to the LLM. The keywords in turn
are generated by a relatively smaller LLM and cached for comparison with
keywords generated by the same smaller LLM against the query raised. This
significantly reduces time and cost to find the context within documents. Once
the context is set, a larger LLM uses that to provide answers based on a prompt
tailored for Q&A. This research work demonstrates that use of keywords in
context identification reduces the overall inference time and cost of
information retrieval. Given this reduction in inference time and cost with the
keyword augmented retrieval framework, a speech based interface for user input
and response readout was integrated. This allowed a seamless interaction with
the language model.


04 Dec 2019 | AMUSED: A Multi-Stream Vector Representation Method for Use in Natural Dialogue | ⬇️

Gaurav Kumar, Rishabh Joshi, Jaspreet Singh, Promod Yenigalla

The problem of building a coherent and non-monotonous conversational agent
with proper discourse and coverage is still an area of open research. Current
architectures only take care of semantic and contextual information for a given
query and fail to completely account for syntactic and external knowledge which
are crucial for generating responses in a chit-chat system. To overcome this
problem, we propose an end to end multi-stream deep learning architecture which
learns unified embeddings for query-response pairs by leveraging contextual
information from memory networks and syntactic information by incorporating
Graph Convolution Networks (GCN) over their dependency parse. A stream of this
network also utilizes transfer learning by pre-training a bidirectional
transformer to extract semantic representation for each input sentence and
incorporates external knowledge through the neighborhood of the entities
from a Knowledge Base (KB). We benchmark these embeddings on next sentence
prediction task and significantly improve upon the existing techniques.
Furthermore, we use AMUSED to represent query and responses along with its
context to develop a retrieval based conversational agent which has been
validated by expert linguists to have comprehensive engagement with humans.


10 Mar 2020 | JS-son -- A Lean, Extensible JavaScript Agent Programming Library | ⬇️

Timotheus Kampik and Juan Carlos Nieves

A multitude of agent-oriented software engineering frameworks exist, most of
which are developed by the academic multi-agent systems community. However,
these frameworks often impose programming paradigms on their users that are
challenging to learn for engineers who are used to modern high-level
programming languages such as JavaScript and Python. To show how the adoption
of agent-oriented programming by the software engineering mainstream can be
facilitated, we provide a lean JavaScript library prototype for implementing
reasoning-loop agents. The library focuses on core agent programming concepts
and refrains from imposing further restrictions on the programming approach. To
illustrate its usefulness, we show how the library can be applied to
multi-agent systems simulations on the web, deployed to cloud-hosted
function-as-a-service environments, and embedded in Python-based data science
tools.


26 May 2023 | InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval | ⬇️

Vitor Jeronymo, Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, Roberto Lotufo, Jakub Zavrel, Rodrigo Nogueira

Recently, InPars introduced a method to efficiently use large language models
(LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced
to generate relevant queries for documents. These synthetic query-document
pairs can then be used to train a retriever. However, InPars and, more
recently, Promptagator, rely on proprietary LLMs such as GPT-3 and FLAN to
generate such datasets. In this work we introduce InPars-v2, a dataset
generator that uses open-source LLMs and existing powerful rerankers to select
synthetic query-document pairs for training. A simple BM25 retrieval pipeline
followed by a monoT5 reranker finetuned on InPars-v2 data achieves new
state-of-the-art results on the BEIR benchmark. To allow researchers to further
improve our method, we open source the code, synthetic data, and finetuned
models: https://github.com/zetaalphavector/inPars/tree/master/tpu


23 Jan 2023 | Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP | ⬇️

Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, Matei Zaharia

Retrieval-augmented in-context learning has emerged as a powerful approach
for addressing knowledge-intensive tasks using frozen language models (LM) and
retrieval models (RM). Existing work has combined these in simple
"retrieve-then-read" pipelines in which the RM retrieves passages that are
inserted into the LM prompt. To begin to fully realize the potential of frozen
LMs and RMs, we propose Demonstrate-Search-Predict (DSP), a framework that
relies on passing natural language texts in sophisticated pipelines between an
LM and an RM. DSP can express high-level programs that bootstrap pipeline-aware
demonstrations, search for relevant passages, and generate grounded
predictions, systematically breaking down problems into small transformations
that the LM and RM can handle more reliably. We have written novel DSP programs
for answering questions in open-domain, multi-hop, and conversational settings,
establishing in early evaluations new state-of-the-art in-context learning
results and delivering 37-120%, 8-39%, and 80-290% relative gains against the
vanilla LM (GPT-3.5), a standard retrieve-then-read pipeline, and a
contemporaneous self-ask pipeline, respectively. We release DSP at
https://github.com/stanfordnlp/dsp


01 Feb 2024 | Executable Code Actions Elicit Better LLM Agents | ⬇️

Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, Heng Ji

Large Language Model (LLM) agents, capable of performing a broad range of
actions, such as invoking tools and controlling robots, show great potential in
tackling real-world challenges. LLM agents are typically prompted to produce
actions by generating JSON or text in a pre-defined format, which is usually
limited by constrained action space (e.g., the scope of pre-defined tools) and
restricted flexibility (e.g., inability to compose multiple tools). This work
proposes to use executable Python code to consolidate LLM agents' actions into
a unified action space (CodeAct). Integrated with a Python interpreter, CodeAct
can execute code actions and dynamically revise prior actions or emit new
actions upon new observations through multi-turn interactions. Our extensive
analysis of 17 LLMs on API-Bank and a newly curated benchmark shows that
CodeAct outperforms widely used alternatives (up to 20% higher success rate).
The encouraging performance of CodeAct motivates us to build an open-source LLM
agent that interacts with environments by executing interpretable code and
collaborates with users using natural language. To this end, we collect an
instruction-tuning dataset CodeActInstruct that consists of 7k multi-turn
interactions using CodeAct. We show that it can be used with existing data to
improve models in agent-oriented tasks without compromising their general
capability. CodeActAgent, finetuned from Llama2 and Mistral, is integrated with
Python interpreter and uniquely tailored to perform sophisticated tasks (e.g.,
model training) using existing libraries and autonomously self-debug.


17 May 2023 | DAMO-NLP at SemEval-2023 Task 2: A Unified Retrieval-augmented System for Multilingual Named Entity Recognition | ⬇️

Zeqi Tan, Shen Huang, Zixia Jia, Jiong Cai, Yinghui Li, Weiming Lu, Yueting Zhuang, Kewei Tu, Pengjun Xie, Fei Huang and Yong Jiang

The MultiCoNER 2 shared task aims to tackle multilingual named entity
recognition (NER) in fine-grained and noisy scenarios, and it inherits the
semantic ambiguity and low-context setting of the MultiCoNER 1 task. To
cope with these problems, the previous top systems in MultiCoNER 1
either incorporate knowledge bases or gazetteers. However, they still
suffer from insufficient knowledge, limited context length, and a single
retrieval strategy. In this paper, our team DAMO-NLP proposes a unified
retrieval-augmented system (U-RaNER) for fine-grained multilingual NER. We
perform error analysis on the previous top systems and reveal that their
performance bottleneck lies in insufficient knowledge. Also, we discover that
the limited context length causes the retrieval knowledge to be invisible to
the model. To enhance the retrieval context, we incorporate the entity-centric
Wikidata knowledge base, while utilizing the infusion approach to broaden the
contextual scope of the model. Also, we explore various search strategies and
refine the quality of retrieval knowledge. Our system (the dataset, code, and
scripts are released at
https://github.com/modelscope/AdaSeq/tree/master/examples/U-RaNER) wins
9 out of 13 tracks in the MultiCoNER 2 shared task. Additionally, we
compared our system with ChatGPT, one of the large language models which have
unlocked strong capabilities on many tasks. The results show that there is
still much room for improvement for ChatGPT on the extraction task.


31 Mar 2023 | Dense Sparse Retrieval: Using Sparse Language Models for Inference Efficient Dense Retrieval | ⬇️

Daniel Campos, ChengXiang Zhai

Vector-based retrieval systems have become a common staple for academic and
industrial search applications because they provide a simple and scalable way
of extending the search to leverage contextual representations for documents
and queries. As these vector-based systems rely on contextual language models,
their usage commonly requires GPUs, which can be expensive and difficult to
manage. Given recent advances in introducing sparsity into language models for
improved inference efficiency, in this paper, we study how sparse language
models can be used for dense retrieval to improve inference efficiency. Using
the popular retrieval library Tevatron and the MSMARCO, NQ, and TriviaQA
datasets, we find that sparse language models can be used as direct
replacements with little to no drop in accuracy and up to 4.3x improved
inference speeds.


29 Feb 2024 | Query-OPT: Optimizing Inference of Large Language Models via Multi-Query Instructions in Meeting Summarization | ⬇️

Md Tahmid Rahman Laskar, Elena Khasanova, Xue-Yong Fu, Cheng Chen, Shashi Bhushan TN

This work focuses on the task of query-based meeting summarization in which
the summary of a context (meeting transcript) is generated in response to a
specific query. When using Large Language Models (LLMs) for this task, a new
call to the LLM inference endpoint/API is required for each new query even if
the context stays the same. However, repeated calls to the LLM inference
endpoints would significantly increase the costs of using them in production,
making LLMs impractical for many real-world use cases. To address this problem,
in this paper, we investigate whether combining the queries for the same input
context in a single prompt to minimize repeated calls can be successfully used
in meeting summarization. In this regard, we conduct extensive experiments by
comparing the performance of various popular LLMs: GPT-4, PaLM-2, LLaMA-2,
Mistral, and FLAN-T5 in single-query and multi-query settings. We observe that
while most LLMs tend to respond to the multi-query instructions, almost all of
them (except GPT-4), even after fine-tuning, could not properly generate the
response in the required output format. We conclude that while multi-query
prompting could be useful to optimize the inference costs by reducing calls to
the inference endpoints/APIs for the task of meeting summarization, this
capability to reliably generate the response in the expected format is only
limited to certain LLMs.


22 Jun 2016 | Emulating Human Conversations using Convolutional Neural Network-based IR | ⬇️

Abhay Prakash, Chris Brockett, Puneet Agrawal

Conversational agents ("bots") are beginning to be widely used in
conversational interfaces. To design a system that is capable of emulating
human-like interactions, a conversational layer that can serve as a fabric for
chat-like interaction with the agent is needed. In this paper, we introduce a
model that employs Information Retrieval by utilizing convolutional deep
structured semantic neural network-based features in the ranker to present
human-like responses in ongoing conversation with a user. In conversations,
accounting for context is critical to the retrieval model; we show that our
context-sensitive approach using a Convolutional Deep Structured Semantic Model
(cDSSM) with character trigrams significantly outperforms several conventional
baselines in terms of the relevance of responses retrieved.


09 Dec 2023 | ESPN: Memory-Efficient Multi-Vector Information Retrieval | ⬇️

Susav Shrestha, Narasimha Reddy, Zongwang Li

Recent advances in large language models have demonstrated remarkable
effectiveness in information retrieval (IR) tasks. While many neural IR systems
encode queries and documents into single-vector representations, multi-vector
models elevate the retrieval quality by producing multi-vector representations
and facilitating similarity searches at the granularity of individual tokens.
However, these models significantly amplify memory and storage requirements for
retrieval indices by an order of magnitude. This escalation in index size
renders the scalability of multi-vector IR models progressively challenging due
to their substantial memory demands. We introduce Embedding from Storage
Pipelined Network (ESPN) where we offload the entire re-ranking embedding
tables to SSDs and reduce the memory requirements by 5-16x. We design a
software prefetcher with hit rates exceeding 90%, improving SSD based retrieval
up to 6.4x, and demonstrate that we can maintain near memory levels of query
latency even for large query batch sizes.


21 Feb 2024 | Effective and Efficient Conversation Retrieval for Dialogue State Tracking with Implicit Text Summaries | ⬇️

Seanie Lee, Jianpeng Cheng, Joris Driesen, Alexandru Coca, Anders Johannsen

Few-shot dialogue state tracking (DST) with Large Language Models (LLM)
relies on an effective and efficient conversation retriever to find similar
in-context examples for prompt learning. Previous works use raw dialogue
context as search keys and queries, and a retriever is fine-tuned with
annotated dialogues to achieve superior performance. However, the approach is
less suited for scaling to new domains or new annotation languages, where
fine-tuning data is unavailable. To address this problem, we handle the task of
conversation retrieval based on text summaries of the conversations. A
LLM-based conversation summarizer is adopted for query and key generation,
which enables effective maximum inner product search. To avoid the extra
inference cost brought by LLM-based conversation summarization, we further
distill a light-weight conversation encoder which produces query embeddings
without decoding summaries for test conversations. We validate our retrieval
approach on MultiWOZ datasets with GPT-Neo-2.7B and LLaMA-7B/30B. The
experimental results show a significant improvement over relevant baselines in
real few-shot DST settings.


03 Nov 2019 | MRNN: A Multi-Resolution Neural Network with Duplex Attention for Document Retrieval in the Context of Question Answering | ⬇️

Tolgahan Cakaloglu, Xiaowei Xu

The primary goal of ad-hoc retrieval (document retrieval in the context of
question answering) is to find relevant documents satisfying the information
need posed in a natural language query. It requires a good understanding of
the query and all the documents in a corpus, which is difficult because the
meaning of natural language texts depends on the context, syntax, and semantics.
Recently deep neural networks have been used to rank search results in response
to a query. In this paper, we devise a multi-resolution neural network(MRNN) to
leverage the whole hierarchy of representations for document retrieval. The
proposed MRNN model is capable of deriving a representation that integrates
representations of different levels of abstraction from all the layers of the
learned hierarchical representation. Moreover, a duplex attention component is
designed to refine the multi-resolution representation so that an optimal
context for matching the query and document can be determined. More
specifically, the first attention mechanism determines optimal context from the
learned multi-resolution representation for the query and document. The latter
attention mechanism aims to fine-tune the representation so that the query and
the relevant document are closer in proximity. The empirical study shows that
MRNN with the duplex attention is significantly superior to existing models
used for ad-hoc retrieval on benchmark datasets including SQuAD, WikiQA,
QUASAR, and TrecQA.


17 Nov 2023 | CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion | ⬇️

Yangruibo Ding, Zijian Wang, Wasi Uddin Ahmad, Hantian Ding, Ming Tan, Nihal Jain, Murali Krishna Ramanathan, Ramesh Nallapati, Parminder Bhatia, Dan Roth, Bing Xiang

Code completion models have made significant progress in recent years, yet
current popular evaluation datasets, such as HumanEval and MBPP, predominantly
focus on code completion tasks within a single file. This over-simplified
setting falls short of representing the real-world software development
scenario where repositories span multiple files with numerous cross-file
dependencies, and accessing and understanding cross-file context is often
required to complete the code correctly.
To fill in this gap, we propose CrossCodeEval, a diverse and multilingual
code completion benchmark that necessitates an in-depth cross-file contextual
understanding to complete the code accurately. CrossCodeEval is built on a
diverse set of real-world, open-sourced, permissively-licensed repositories in
four popular programming languages: Python, Java, TypeScript, and C#. To create
examples that strictly require cross-file context for accurate completion, we
propose a straightforward yet efficient static-analysis-based approach to
pinpoint the use of cross-file context within the current file.
Extensive experiments on state-of-the-art code language models like CodeGen
and StarCoder demonstrate that CrossCodeEval is extremely challenging when the
relevant cross-file context is absent, and we see clear improvements when
adding this context into the prompt. However, despite such improvements, the
pinnacle of performance remains notably unattained even with the
highest-performing model, indicating that CrossCodeEval is also capable of
assessing a model's capability in leveraging extensive context to make better
code completion. Finally, we benchmarked various methods in retrieving
cross-file context, and show that CrossCodeEval can also be used to measure the
capability of code retrievers.


11 Apr 2023 | Decomposed Prompting: A Modular Approach for Solving Complex Tasks | ⬇️

Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, Ashish Sabharwal

Few-shot prompting is a surprisingly powerful way to use Large Language
Models (LLMs) to solve various tasks. However, this approach struggles as the
task complexity increases or when the individual reasoning steps of the task
themselves are hard to learn, especially when embedded in more complex tasks.
To address this, we propose Decomposed Prompting, a new approach to solve
complex tasks by decomposing them (via prompting) into simpler sub-tasks that
can be delegated to a library of prompting-based LLMs dedicated to these
sub-tasks. This modular structure allows each prompt to be optimized for its
specific sub-task, further decomposed if necessary, and even easily replaced
with more effective prompts, trained models, or symbolic functions if desired.
We show that the flexibility and modularity of Decomposed Prompting allows it
to outperform prior work on few-shot prompting using GPT3. On symbolic
reasoning tasks, we can further decompose sub-tasks that are hard for LLMs into
even simpler solvable sub-tasks. When the complexity comes from the input
length, we can recursively decompose the task into the same task but with
smaller inputs. We also evaluate our approach on textual multi-step reasoning
tasks: on long-context multi-hop QA task, we can more effectively teach the
sub-tasks via our separate sub-tasks prompts; and on open-domain multi-hop QA,
we can incorporate a symbolic information retrieval within our decomposition
framework, leading to improved performance on both tasks. Datasets, Code and
Prompts available at https://github.com/allenai/DecomP.


23 Jun 2023 | Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions | ⬇️

Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal

Prompting-based large language models (LLMs) are surprisingly powerful at
generating natural language reasoning steps or Chains-of-Thoughts (CoT) for
multi-step question answering (QA). They struggle, however, when the necessary
knowledge is either unavailable to the LLM or not up-to-date within its
parameters. While using the question to retrieve relevant text from an external
knowledge source helps LLMs, we observe that this one-step retrieve-and-read
approach is insufficient for multi-step QA. Here, what to retrieve
depends on what has already been derived, which in turn may depend on
what was previously retrieved. To address this, we propose IRCoT, a
new approach for multi-step QA that interleaves retrieval with steps
(sentences) in a CoT, guiding the retrieval with CoT and in turn using
retrieved results to improve CoT. Using IRCoT with GPT3 substantially improves
retrieval (up to 21 points) as well as downstream QA (up to 15 points) on four
datasets: HotpotQA, 2WikiMultihopQA, MuSiQue, and IIRC. We observe similar
substantial gains in out-of-distribution (OOD) settings as well as with much
smaller models such as Flan-T5-large without additional training. IRCoT reduces
model hallucination, resulting in factually more accurate CoT reasoning. Code,
data, and prompts are available at https://github.com/stonybrooknlp/ircot


23 Feb 2023 | Coarse-to-Fine Knowledge Selection for Document Grounded Dialogs | ⬇️

Yeqin Zhang, Haomin Fu, Cheng Fu, Haiyang Yu, Yongbin Li, Cam-Tu Nguyen

Multi-document grounded dialogue systems (DGDS) belong to a class of
conversational agents that answer users' requests by finding supporting
knowledge from a collection of documents. Most previous studies aim to improve
the knowledge retrieval model or propose more effective ways to incorporate
external knowledge into a parametric generation model. These methods, however,
focus on retrieving knowledge from mono-granularity language units (e.g.
passages, sentences, or spans in documents), which is not enough to effectively
and efficiently capture precise knowledge in long documents. This paper
proposes Re3G, which aims to optimize both coarse-grained knowledge retrieval
and fine-grained knowledge extraction in a unified framework. Specifically, the
former efficiently finds relevant passages in a retrieval-and-reranking
process, whereas the latter effectively extracts finer-grain spans within those
passages to incorporate into a parametric answer generation model (BART, T5).
Experiments on DialDoc Shared Task demonstrate the effectiveness of our method.


20 Jul 2022 | RV4JaCa -- Runtime Verification for Multi-Agent Systems | ⬇️

Debora C. Engelmann (PUCRS and UniGe), Angelo Ferrando (UniGe), Alison R. Panisson (UFSC), Davide Ancona (UniGe), Rafael H. Bordini (PUCRS), Viviana Mascardi (UniGe)

This paper presents a Runtime Verification (RV) approach for Multi-Agent
Systems (MAS) using the JaCaMo framework. Our objective is to bring a layer of
security to the MAS. This layer is capable of controlling events during the
execution of the system without needing a specific implementation in the
behaviour of each agent to recognise the events. MAS have been used in the
context of hybrid intelligence. This use requires communication between
software agents and human beings. In some cases, communication takes place via
natural language dialogues. However, this kind of communication brings us to a
concern related to controlling the flow of dialogue so that agents can prevent
any change in the topic of discussion that could impair their reasoning. We
demonstrate the implementation of a monitor that aims to control this dialogue
flow in a MAS that communicates with the user through natural language to aid
decision-making in hospital bed allocation.

To create a context prompt and document search and retrieval system using Multi-Agent Systems (MAS) in Streamlit Python and HTML5 with JavaScript, you can follow these steps:

1. Define the agents:

Multi-Agent System Agent Definitions:
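
A minimal sketch of the remaining agent classes, building on the DocumentCollectionAgent sketched earlier in this thread (the method bodies are deliberately simple placeholders, and the names match the controller code shown later in this reply):

import streamlit as st

class QueryProcessingAgent:
    def __init__(self):
        self.index = []  # shared with DocumentCollectionAgent.get_index()

    def process_query(self, query):
        # Normalize the query into lowercase terms
        return query.lower().split()

    def retrieve_documents(self, terms):
        # Naive term-overlap retrieval over the shared index
        return [d for d in self.index if any(t in d.lower() for t in terms)]

class ContextGenerationAgent:
    def generate_context(self, query, docs):
        # Concatenate the top documents into a context prompt
        snippets = "\n\n".join(d[:500] for d in docs[:3])
        return f"Question: {query}\n\nRelevant context:\n{snippets}"

class UserInterfaceAgent:
    def create_interface(self):
        st.title("Document Search with a Multi-Agent System")

    def handle_user_input(self):
        return st.text_input("Enter your query")

    def display_results(self, context, docs):
        st.subheader("Context prompt")
        st.write(context)
        st.subheader("Retrieved documents")
        for doc in docs:
            st.write(doc[:300])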

2. Implement the agents:
Flesh out the methods of each agent class, implementing functionality such as web crawling, document parsing, indexing, query processing, and context generation.
3. Create the user interface:
Use Streamlit to create the layout and formatting of the interface, and HTML5 with JavaScript for additional interactivity.

Streamlit Interface with HTML5 and JavaScript:
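
A minimal sketch of the Streamlit side of the interface (layout only; step 4 below wires in the real agent calls, and the widget labels are illustrative):

import streamlit as st

st.set_page_config(page_title="MAS Document Search")
st.title("Context Prompt and Document Search")

query = st.text_input("Enter your query")

if st.button("Search") and query:
    # Placeholder output; step 4 replaces this with real agent calls
    st.subheader("Context prompt")
    st.info(f"Context generated for: {query}")
    st.subheader("Retrieved documents")
    st.write("(results will appear here)")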

4. Integrate the agents with the user interface:
Call the methods of the agents from the Streamlit interface code. For example, when a user enters a query, call the query processing agent to retrieve relevant documents and the context generation agent to create a context prompt.
5. Implement the multi-agent system:
Create a main controller that coordinates the interactions between agents. This controller will manage the flow of information and the decision-making process.

Multi-Agent System Controller (full code below):

6. Test and refine the system:
Run the system and test it with various queries. Refine the agents and their interactions based on the results and performance.

To use this Multi-Agent System for context prompt and document search retrieval in Streamlit:

1. Implement each agent class with the necessary methods.
2. Create the Streamlit interface using the provided code as a starting point.
3. Implement the MASController to coordinate the agents.
4. Run the Streamlit app, which will use the MAS to process queries, retrieve documents, generate context, and display results.

This approach allows for a modular and extensible system where each agent can be improved or replaced independently. The use of Streamlit with HTML5 and JavaScript enables the creation of an interactive and user-friendly interface for the document retrieval system.

class MASController:
    def __init__(self):
        self.doc_agent = DocumentCollectionAgent()
        self.query_agent = QueryProcessingAgent()
        self.context_agent = ContextGenerationAgent()
        self.ui_agent = UserInterfaceAgent()

    def initialize_system(self):
        # Initialize the document collection
        urls = ["https://example1.com", "https://example2.com"]
        for url in urls:
            content = self.doc_agent.crawl_website(url)
            parsed_doc = self.doc_agent.parse_document(content)
            self.doc_agent.index_document(parsed_doc)

        # Share the document index with the query agent
        self.query_agent.index = self.doc_agent.get_index()

        # Initialize the UI
        self.ui_agent.create_interface()

    def process_user_query(self, query):
        # Process the query
        processed_query = self.query_agent.process_query(query)

        # Retrieve relevant documents
        retrieved_docs = self.query_agent.retrieve_documents(processed_query)

        # Generate a context prompt from the query and retrieved documents
        context = self.context_agent.generate_context(query, retrieved_docs)

        # Display the results
        self.ui_agent.display_results(context, retrieved_docs)

    def run(self):
        self.initialize_system()
        # In a plain Python process this loop polls for input; under Streamlit,
        # the script re-runs on each interaction, so a single pass per run suffices.
        while True:
            user_query = self.ui_agent.handle_user_input()
            if user_query:
                self.process_user_query(user_query)

if __name__ == "__main__":
    controller = MASController()
    controller.run()

🤖 Multi-Agent System for Context Prompt and Document Retrieval in Streamlit
🔍 Original Question
How can Multi-Agent Systems be used in Streamlit Python and HTML5 with JavaScript to create a context prompt and document search retrieval system?
📚 Research Summary and Implementation Guide

1. 🏗️ System Architecture Design

Define the overall structure of the Multi-Agent System (MAS)
Identify key components: Document Collection, Query Processing, Context Generation, and User Interface

2. 🤖 Agent Definition

Specify roles and responsibilities for each agent type
Create Python classes for DocumentCollectionAgent, QueryProcessingAgent, ContextGenerationAgent, and UserInterfaceAgent

3. 🕷️ Document Collection Agent Implementation

Develop web crawling functionality
Implement document parsing and indexing methods
Store collected documents in a suitable data structure

4. 🧠 Query Processing Agent Development

Design query processing algorithms
Implement document retrieval based on processed queries
Optimize search efficiency using appropriate indexing techniques
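
For this step, a minimal sketch of a TF-IDF-based retriever, assuming scikit-learn is available (the class shape is illustrative; a production system might swap in BM25 or dense embeddings):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class TfidfQueryProcessingAgent:
    def __init__(self, documents):
        self.documents = documents
        self.vectorizer = TfidfVectorizer(stop_words="english")
        self.doc_matrix = self.vectorizer.fit_transform(documents)

    def retrieve_documents(self, query, top_k=5):
        # Rank documents by cosine similarity between query and document vectors
        query_vec = self.vectorizer.transform([query])
        scores = cosine_similarity(query_vec, self.doc_matrix).ravel()
        ranked = scores.argsort()[::-1][:top_k]
        return [(self.documents[i], float(scores[i])) for i in ranked]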

5. 💡 Context Generation Agent Creation

Integrate a language model for context generation
Develop methods to analyze retrieved documents and generate relevant context
Ensure context is coherent and useful for the user's query
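
One way to assemble the retrieved documents into a prompt for a language model; llm_complete below is a placeholder for whatever model endpoint is used:

def build_context_prompt(query, docs, max_chars=1500):
    # Trim and join the retrieved documents into a bounded context block
    per_doc = max_chars // max(len(docs), 1)
    context = "\n\n".join(d[:per_doc] for d in docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

# answer = llm_complete(build_context_prompt(query, top_docs))  # placeholder call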

6. 🖥️ User Interface Agent with Streamlit

Create a Streamlit app for the main interface
Design input fields for user queries
Implement result display functionality

7. 🔗 HTML5 and JavaScript Integration

Develop custom HTML components for enhanced interactivity
Create JavaScript functions for dynamic content updates
Ensure seamless integration with Streamlit using components.html()
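
A small illustrative example of embedding custom HTML5/JavaScript via Streamlit's components.html (the widget itself, a live character counter, is just a demonstration):

import streamlit.components.v1 as components

components.html(
    """
    <input id="q" placeholder="Type to see a live character count">
    <p id="count">0 characters</p>
    <script>
      document.getElementById("q").addEventListener("input", function (e) {
        document.getElementById("count").textContent =
          e.target.value.length + " characters";
      });
    </script>
    """,
    height=120,
)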

8. 🎭 Multi-Agent System Controller

Implement a central controller class (MASController)
Manage agent interactions and information flow
Coordinate the overall system operation

9. 🚀 System Initialization

Set up initial document collection and indexing
Initialize all agents with necessary data and connections
Prepare the user interface for first user interaction

10. 🔄 Query Processing Workflow

Implement the flow from user input to query processing
Ensure efficient communication between Query Processing Agent and Document Collection Agent
Optimize retrieval speed and relevance

11. 📊 Result Ranking and Presentation

Develop algorithms for ranking retrieved documents
Create visually appealing result displays in Streamlit
Implement pagination or infinite scrolling for large result sets
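
A sketch of simple pagination for ranked results in Streamlit (the stand-in result list is illustrative):

import streamlit as st

RESULTS = [f"Document {i}" for i in range(1, 51)]  # stand-in ranked results
PAGE_SIZE = 10

page = st.number_input("Page", min_value=1,
                       max_value=(len(RESULTS) - 1) // PAGE_SIZE + 1, value=1)
start = (page - 1) * PAGE_SIZE
for doc in RESULTS[start:start + PAGE_SIZE]:
    st.write(doc)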

12. 🧩 Context Prompt Generation

Fine-tune context generation based on user query and retrieved documents
Implement methods to summarize or highlight key information
Ensure generated context adds value to the user's search experience

13. 💾 Caching and Performance Optimization

Implement caching mechanisms for frequent queries
Optimize database queries and document retrieval
Utilize Streamlit's caching capabilities for improved performance
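
A small sketch of Streamlit's built-in caching applied to retrieval (the search body is a stand-in for the real agent call):

import streamlit as st

DOCUMENTS = ["example document one", "example document two"]  # stand-in corpus

@st.cache_data(ttl=3600)  # re-compute at most once per hour per distinct query
def cached_search(query: str) -> list[str]:
    # Streamlit memoizes the return value, so repeated identical queries
    # skip the expensive retrieval work.
    return [d for d in DOCUMENTS if query.lower() in d.lower()]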

14. 🔒 Security and Error Handling

Implement input sanitization and validation
Develop robust error handling and logging
Ensure system stability and security
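
A minimal example of input sanitization before a query reaches any agent (the length limit is illustrative):

import html

def sanitize_query(raw: str, max_len: int = 200) -> str:
    # Escape HTML special characters and bound the query length
    return html.escape(raw.strip())[:max_len]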

15. 🌐 Scalability Considerations

Design the system to handle increasing document collections
Implement load balancing for multi-user scenarios
Consider distributed computing options for large-scale deployments

16. 📱 Responsive Design

Ensure the Streamlit interface is mobile-friendly
Implement responsive HTML and CSS for custom components
Test on various devices and screen sizes

17. 🔍 Advanced Search Features

Implement filters, faceted search, and advanced query options
Develop auto-suggestion and auto-complete functionality
Consider semantic search capabilities

18. 📈 Analytics and Feedback Loop

Implement usage analytics to track popular queries and documents
Develop a feedback mechanism for users to rate search results
Use collected data to continuously improve the system
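
A sketch of a simple in-app feedback widget using Streamlit session state (persistence to a database or log file is left out):

import streamlit as st

rating = st.radio("Was this result helpful?", ["Yes", "No"], horizontal=True)
if st.button("Submit feedback"):
    # In a real system this would be persisted for later analysis
    if "feedback_log" not in st.session_state:
        st.session_state.feedback_log = []
    st.session_state.feedback_log.append(rating)
    st.success("Thanks for the feedback!")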

19. 🧪 Testing and Quality Assurance

Develop unit tests for each agent and component
Implement integration tests for the entire system
Perform user acceptance testing and gather feedback
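
A unit-test sketch for the retrieval agent (assumes the TfidfQueryProcessingAgent sketched under step 4 is importable):

import unittest

class TestRetrieval(unittest.TestCase):
    def test_query_returns_matching_document(self):
        agent = TfidfQueryProcessingAgent(["alpha document", "beta document"])
        results = agent.retrieve_documents("alpha", top_k=1)
        # The top-ranked document should contain the query term
        self.assertIn("alpha", results[0][0])

if __name__ == "__main__":
    unittest.main()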

20. 📚 Documentation and Deployment

Create comprehensive documentation for the system
Prepare deployment scripts and environment configurations
Set up monitoring and maintenance procedures

🚀 Next Steps

Begin implementing each component following the outlined steps
Regularly test and iterate on each part of the system
Gather user feedback and continuously improve the user experience
Consider expanding the system with additional features and capabilities

graph TD
A[🏗️ System Architecture] --> B[🤖 Agent Definition]
B --> C[🕷️ Document Collection Agent]
B --> D[🧠 Query Processing Agent]
B --> E[💡 Context Generation Agent]
B --> F[🖥️ User Interface Agent]
F --> G[🔗 HTML5 & JavaScript]
C --> H[🎭 MAS Controller]
D --> H
E --> H
F --> H
H --> I[🚀 System Initialization]
I --> J[🔄 Query Processing]
J --> K[📊 Result Ranking]
K --> L[🧩 Context Generation]
L --> M[💾 Optimization]
M --> N[🔒 Security]
N --> O[🌐 Scalability]
O --> P[📱 Responsive Design]
P --> Q[🔍 Advanced Search]
Q --> R[📈 Analytics]
R --> S[🧪 Testing]
S --> T[📚 Documentation & Deployment]

graph TD
MAS[Multi-Agent System Development]
MAS --> Design[System Design]
MAS --> Implementation[Implementation]
MAS --> Integration[Integration]
MAS --> Deployment[Deployment]

Design --> ArchitectureDesign[Architecture Design]
Design --> AgentDefinition[Agent Definition]
Design --> Interfaces[Interface Design]

ArchitectureDesign --> Components[Identify Key Components]
Components --> DC[Document Collection]
Components --> QP[Query Processing]
Components --> CG[Context Generation]
Components --> UI[User Interface]

AgentDefinition --> Roles[Define Agent Roles]
Roles --> DCAgent[Document Collection Agent]
Roles --> QPAgent[Query Processing Agent]
Roles --> CGAgent[Context Generation Agent]
Roles --> UIAgent[User Interface Agent]

Implementation --> AgentImplementation[Agent Implementation]
AgentImplementation --> DCImpl[Implement Document Collection]
AgentImplementation --> QPImpl[Implement Query Processing]
AgentImplementation --> CGImpl[Implement Context Generation]
AgentImplementation --> UIImpl[Implement User Interface]

DCImpl --> Crawling[Web Crawling]
DCImpl --> Parsing[Document Parsing]
DCImpl --> Indexing[Document Indexing]

QPImpl --> QueryAlgo[Query Processing Algorithms]
QPImpl --> Retrieval[Document Retrieval]
QPImpl --> SearchOpt[Search Optimization]

CGImpl --> LMIntegration[Language Model Integration]
CGImpl --> ContextAnalysis[Context Analysis]
CGImpl --> ContextGeneration[Context Generation]

UIImpl --> StreamlitApp[Streamlit Application]
UIImpl --> InputFields[Query Input Fields]
UIImpl --> ResultDisplay[Result Display]

Integration --> HTML5JS[HTML5 & JavaScript Integration]
Integration --> Controller[MAS Controller Implementation]

HTML5JS --> CustomComponents[Custom HTML Components]
HTML5JS --> DynamicUpdates[Dynamic Content Updates]
HTML5JS --> StreamlitIntegration[Streamlit Integration]

Controller --> AgentInteraction[Manage Agent Interactions]
Controller --> DataFlow[Coordinate Data Flow]
Controller --> SystemOperation[Oversee System Operation]

Deployment --> SystemInit[System Initialization]
Deployment --> WorkflowImpl[Workflow Implementation]
Deployment --> Optimization[System Optimization]

SystemInit --> InitialCollection[Initial Document Collection]
SystemInit --> AgentSetup[Agent Setup]
SystemInit --> UIPrep[UI Preparation]

WorkflowImpl --> QueryFlow[Query Processing Flow]
WorkflowImpl --> ResultRanking[Result Ranking]
WorkflowImpl --> ContextPrompt[Context Prompt Generation]

Optimization --> Caching[Implement Caching]
Optimization --> PerformanceOpt[Performance Optimization]
Optimization --> Security[Security Implementation]
Optimization --> Scalability[Scalability Considerations]
Optimization --> ResponsiveDesign[Responsive Design]
Optimization --> AdvancedFeatures[Advanced Search Features]
Optimization --> Analytics[Analytics & Feedback]

MAS --> Testing[Testing & QA]
Testing --> UnitTesting[Unit Testing]
Testing --> IntegrationTesting[Integration Testing]
Testing --> UserTesting[User Acceptance Testing]

MAS --> Documentation[Documentation]
Documentation --> SystemDocs[System Documentation]
Documentation --> DeploymentScripts[Deployment Scripts]
Documentation --> Monitoring[Monitoring Setup]

Create a high-detail image slide from my Mermaid model.

