JarvisChan630 committed · Commit 67b3290 · Parent: 75309ed
add png
Files changed:
- README.md (+73 -8)
- agents/meta_agent.py (+5 -5)
README.md
CHANGED
@@ -2,6 +2,46 @@

A project for versatile AI agents that can run with proprietary models or completely open-source. The meta expert has two agents: a basic [Meta Agent](Docs/Meta-Prompting%20Overview.MD), and [Jar3d](Docs/Introduction%20to%20Jar3d.MD), a more sophisticated and versatile agent.

## Table of Contents

1. [Core Concepts](#core-concepts)

@@ -19,30 +59,55 @@ A project for versatile AI agents that can run with proprietary models or comple

This project leverages four core concepts:

-1. Meta prompting
-2. Chain of Reasoning
3. [Jar3d](#setup-for-jar3d) uses retrieval augmented generation, which isn't used within the [Basic Meta Agent](#setup-for-basic-meta-agent). Read our notes on [Overview of Agentic RAG](Docs/Overview%20of%20Agentic%20RAG.MD).
-4. Jar3d can generate knowledge graphs from web-pages allowing it to produce more comprehensive outputs.

## Prerequisites

-
```bash
-
```

-
- [Docker](https://www.docker.com/get-started)
- [Docker Compose](https://docs.docker.com/compose/install/)

-
- [Neo4j Aura](https://neo4j.com/)

## Configuration

1. Navigate to the Repository:
```bash
-cd /path/to/your-repo/
```

2. Open the `config.yaml` file:


A project for versatile AI agents that can run with proprietary models or completely open-source. The meta expert has two agents: a basic [Meta Agent](Docs/Meta-Prompting%20Overview.MD), and [Jar3d](Docs/Introduction%20to%20Jar3d.MD), a more sophisticated and versatile agent.

+Acts as an open-source Perplexity.
+
+Thanks to John Adeojo, who brought this wonderful project to the open-source community!
+
+## PMF: What problem does this project solve?
+
+## Technical Details
+What is the logic behind it?
+
+### LLM Application Workflow
+1. User Query: The user initiates the interaction by submitting a query or request for information.
+2. Agent Accesses the Internet: The agent retrieves relevant information from various online sources, such as web pages, articles, and databases.
+3. Document Chunking: The retrieved URLs are processed to break down the content into smaller, manageable documents or chunks. This step ensures that the information is more digestible and can be analyzed effectively.
+4. Vectorization: Each document chunk is then transformed into a multi-dimensional embedding using vectorization techniques. This process captures the semantic meaning of the text, allowing for nuanced comparisons between different pieces of information.
+5. Similarity Search: A similarity search is performed using cosine similarity (or another appropriate metric) to identify and rank the most relevant document chunks in relation to the original user query. This step helps in finding the closest matches based on the embeddings generated earlier.
+6. Response Generation: Finally, the most relevant chunks are selected, and the LLM synthesizes them into a coherent response that directly addresses the user's query (see the sketch after this list).
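As a rough illustration of steps 3-6, here is a minimal, self-contained sketch of chunking, vectorization, and cosine-similarity retrieval. It is not the project's actual implementation; the use of `sentence-transformers` and the model name are assumptions made purely for illustration.

```python
# Minimal sketch of steps 3-6, not the repository's implementation.
# Assumes the `sentence-transformers` package purely for illustration.
import numpy as np
from sentence_transformers import SentenceTransformer


def chunk(text: str, size: int = 500) -> list[str]:
    # Step 3: naive fixed-size chunking of one scraped page.
    return [text[i:i + size] for i in range(0, len(text), size)]


def top_k_chunks(query: str, pages: list[str], k: int = 3) -> list[str]:
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    chunks = [c for page in pages for c in chunk(page)]
    # Step 4: embed every chunk and the query.
    chunk_vecs = model.encode(chunks)            # shape (n_chunks, dim)
    query_vec = model.encode([query])[0]         # shape (dim,)
    # Step 5: cosine similarity between the query and each chunk.
    sims = chunk_vecs @ query_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    # Step 6 would hand the top-ranked chunks to the LLM as context for the answer.
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]
```

In the real agent, the selected chunks (optionally enriched via the knowledge graph when hybrid retrieval is enabled) become the context the LLM uses to write the final answer.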
+
+## Key Features
+- Implements RAG, Chain-of-Reasoning, and Meta-Prompting to complete long-running research tasks.
+
+- Neo4j Knowledge Graphs
+- Why use them? Naive RAG only matches isolated text chunks, whereas a knowledge graph also captures the relationships between entities, which helps with more complex queries (see the sketch below).
+
+Naive RAG:
+
+![naive](image.png)
+
+Complex queries benefit from a graph:
+
+![why need graph](assets/image.png)
+
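As a rough illustration of the knowledge-graph side (this is not the repository's actual schema or ingestion code), the sketch below writes extracted subject-relation-object triples into Neo4j Aura with the official `neo4j` Python driver; the URI, credentials, and the `Entity`/`RELATES` schema are placeholders.

```python
# Sketch only: load (subject, relation, object) triples into Neo4j Aura using the
# official `neo4j` driver. URI, credentials, and the Entity/RELATES schema are
# placeholders, not this repository's actual schema.
from neo4j import GraphDatabase

URI = "neo4j+s://<your-aura-instance>.databases.neo4j.io"  # placeholder
AUTH = ("neo4j", "<password>")                             # placeholder

triples = [("Jar3d", "USES", "Neo4j"), ("Jar3d", "BUILDS", "Knowledge Graph")]

driver = GraphDatabase.driver(URI, auth=AUTH)
with driver.session() as session:
    for subj, rel, obj in triples:
        # MERGE keeps the graph deduplicated when the same fact is seen twice.
        session.run(
            "MERGE (s:Entity {name: $subj}) "
            "MERGE (o:Entity {name: $obj}) "
            "MERGE (s)-[:RELATES {type: $rel}]->(o)",
            subj=subj, rel=rel, obj=obj,
        )
driver.close()
```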
+- Docker for the backend.
+
+- NLM-Ingestor (llmsherpa API) for chunking data (see the sketch below).
+
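A minimal sketch of layout-aware chunking through the llmsherpa API. It assumes an NLM-Ingestor service is reachable at the URL shown (adjust host, port, and path to your deployment); it is illustrative, not the repository's exact ingestion code.

```python
# Sketch: layout-aware chunking via llmsherpa's LayoutPDFReader, assuming an
# NLM-Ingestor service is reachable at the URL below (adjust as needed).
from llmsherpa.readers import LayoutPDFReader

LLMSHERPA_API_URL = "http://localhost:5010/api/parseDocument?renderFormat=all"  # assumption

reader = LayoutPDFReader(LLMSHERPA_API_URL)
doc = reader.read_pdf("https://arxiv.org/pdf/2401.12954")  # accepts a URL or a local path
for section_chunk in doc.chunks():
    # to_context_text() returns the chunk text together with its section heading context.
    print(section_chunk.to_context_text())
```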
+## FAQ
+1. Is a recursion limit of more than 30 rounds necessary? Doesn't that cost too much? (See the note below.)
+
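For reference, the recursion limit is just a number threaded into the agent's state and graph config; this commit lowers it to 15 in `agents/meta_agent.py`. A lower limit caps the number of agent steps, and therefore LLM calls and cost, per query, at the risk of stopping before a long-running research task is finished. A small sketch of that pattern, with `state` as a stand-in dict:

```python
# Sketch of the pattern used in agents/meta_agent.py (see the diff below); `state`
# here is a plain dict standing in for the agent's real state object.
recursion_limit = 15
state = {"user_input": "example query"}
state["recursion_limit"] = recursion_limit
limit = {"recursion_limit": recursion_limit}  # passed as config when the graph is invoked
print(limit)
```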
## Table of Contents

1. [Core Concepts](#core-concepts)

This project leverages four core concepts:

+1. **Meta prompting**: For more information, refer to the paper on **Meta-Prompting** ([source](https://arxiv.org/pdf/2401.12954)). Read our notes on [Meta-Prompting Overview](Docs/Meta-Prompting%20Overview.MD) for a more concise overview (see also the sketch after this list).
+2. **Chain of Reasoning**: For [Jar3d](#setup-for-jar3d), we also leverage an adaptation of [Chain-of-Reasoning](https://github.com/ProfSynapse/Synapse_CoR).
3. [Jar3d](#setup-for-jar3d) uses retrieval augmented generation, which isn't used within the [Basic Meta Agent](#setup-for-basic-meta-agent). Read our notes on [Overview of Agentic RAG](Docs/Overview%20of%20Agentic%20RAG.MD).
+4. **Jar3d** can generate knowledge graphs from web pages, allowing it to produce more comprehensive outputs.

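To illustrate the meta-prompting pattern in the loosest possible terms (this is not code from this repository; `call_llm` is a hypothetical stand-in for whatever chat-completion client you use):

```python
# A deliberately tiny sketch of the meta-prompting idea; this is NOT code from this
# repository. `call_llm` is a hypothetical stand-in for your chat-completion client.
def call_llm(system_prompt: str, user_prompt: str) -> str:
    raise NotImplementedError("wire this up to your chat-completion client")


def meta_prompted_answer(task: str) -> str:
    # 1. The meta agent writes the system prompt for a purpose-built expert.
    expert_prompt = call_llm(
        "You are a meta agent. Write a system prompt for the single expert best "
        "suited to this task, including its persona and a step-by-step plan.",
        task,
    )
    # 2. The generated expert then performs the task, chain-of-reasoning style.
    return call_llm(expert_prompt, f"Task: {task}\nReason step by step, then answer.")
```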
## Prerequisites

+### Environment Setup
+1. **Install Anaconda:**
+Download Anaconda from [https://www.anaconda.com/](https://www.anaconda.com/).
+
+2. **Create a Virtual Environment:**
```bash
+conda create -n agent_env python=3.11 pip
+```
+
+3. **Activate the Virtual Environment:**
+```bash
+conda activate agent_env
```

+## Repository Setup
+1. **Clone the Repository:**
+```bash
+git clone https://github.com/JarvisChan666/SuperExpert
+```
+
+2. **Navigate to the Repository:**
+```bash
+cd /path/to/your-repo/SuperExpert
+```
+
+3. **Install Requirements:**
+```bash
+pip install -r requirements.txt
+```
+
+4. You will need Docker and Docker Compose installed to get the project up and running:
- [Docker](https://www.docker.com/get-started)
- [Docker Compose](https://docs.docker.com/compose/install/)

+5. **If you wish to use Hybrid Retrieval, you will need to create a free Neo4j Aura account:**
- [Neo4j Aura](https://neo4j.com/)

## Configuration

1. Navigate to the Repository:
```bash
+cd /path/to/your-repo/SuperExpert
```

2. Open the `config.yaml` file:
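Purely as an illustration of how a YAML config like this is typically loaded (the key names below are hypothetical, not necessarily those in this repository's `config.yaml`):

```python
# Hypothetical example of loading config.yaml with PyYAML; the key names are
# placeholders, not necessarily the keys used by this repository.
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

serper_api_key = config.get("SERPER_API_KEY")  # hypothetical key
openai_api_key = config.get("OPENAI_API_KEY")  # hypothetical key
```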
agents/meta_agent.py
CHANGED
@@ -204,7 +204,8 @@ class ToolExpert(BaseAgent[State]):

    def get_guided_json(self, state: State) -> Dict[str, Any]:
        pass
-
    def use_tool(self, mode: str, tool_input: str, doc_type: str = None) -> Any:
        if mode == "serper":
            results = serper_search(tool_input, self.location)
@@ -402,10 +403,9 @@ if __name__ == "__main__":
        "server": "claude",
        "temperature": 0.5
    }
-
-    # For OpenAI
    agent_kwargs = {
-        "model": "gpt-4o",
        "server": "openai",
        "temperature": 0.1
    }
@@ -473,7 +473,7 @@ if __name__ == "__main__":
            break

        # current_time = datetime.now()
-        recursion_limit =
        state["recursion_limit"] = recursion_limit
        state["user_input"] = query
        limit = {"recursion_limit": recursion_limit}

    def get_guided_json(self, state: State) -> Dict[str, Any]:
        pass
+
+    # Use Serper to search
    def use_tool(self, mode: str, tool_input: str, doc_type: str = None) -> Any:
        if mode == "serper":
            results = serper_search(tool_input, self.location)
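For context on what the `serper` mode calls out to: Serper.dev exposes a simple Google Search endpoint. The helper below is an illustrative stand-in, not the repository's `serper_search` implementation, and the `gl` parameter is an assumption about how the location argument is used.

```python
# Illustrative stand-in for a Serper search call; not the repository's implementation.
import requests


def serper_search_example(query: str, location: str = "us") -> dict:
    response = requests.post(
        "https://google.serper.dev/search",
        headers={"X-API-KEY": "<your-serper-api-key>", "Content-Type": "application/json"},
        json={"q": query, "gl": location},  # "gl" = country code; an assumption here
    )
    response.raise_for_status()
    return response.json()  # contains organic results, answer boxes, etc.
```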

        "server": "claude",
        "temperature": 0.5
    }
+
    agent_kwargs = {
+        "model": "gpt-4o-mini",
        "server": "openai",
        "temperature": 0.1
    }

            break

        # current_time = datetime.now()
+        recursion_limit = 15
        state["recursion_limit"] = recursion_limit
        state["user_input"] = query
        limit = {"recursion_limit": recursion_limit}