Big O #2
by oissa - opened
- .gitattributes +0 -6
- README.md +36 -53
- app.py +12 -1
- data/ai_bookmarks_cache.json +334 -106
- data/ai_diagram.png +0 -3
- requirements.txt +8 -8
- src/agents/bookmarks_agent.py +5 -8
- src/agents/categoriser_agent.py +5 -8
- src/agents/gmail_agent.py +10 -7
- src/agents/manager_agent.py +5 -8
- src/agents/web_agents.py +5 -5
- src/interfaces/gradio_interface.py +84 -152
.gitattributes
CHANGED
@@ -33,9 +33,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
-data/*.png filter=lfs diff=lfs merge=lfs -text
-data/*.bmp filter=lfs diff=lfs merge=lfs -text
-data/*.tiff filter=lfs diff=lfs merge=lfs -text
-data/*.jpg filter=lfs diff=lfs merge=lfs -text
-data/*.jpeg filter=lfs diff=lfs merge=lfs -text
-data/*.gif filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -19,14 +19,6 @@ tags:
 
 ---
 
-## π₯ Project Demo Video
-
-[](https://youtu.be/CD0j2dGVycs)
-
-**Watch the full project demonstration:** [https://youtu.be/CD0j2dGVycs](https://youtu.be/CD0j2dGVycs)
-
----
-
 # π§ ReMind β Bring your past to mind
 
 **ReMind is a unified digital-memory assistant that learns from your AI newsletters, curates your Chrome bookmarks, and lets you query both in natural languageβall orchestrated by a multi-agent system.**
@@ -44,19 +36,19 @@ Built for the **Agents & MCP Hackathon 2025 β Track 3: Agentic Demo Showcase**
 | Newsletter overload every morning | One-click OAuth fetch, auto-tagging & summarisation |
 | Thousand-tab bookmark graveyard | AI categorisation & semantic search |
 | Context-switching between inbox, browser, search | Single chat interface powered by agents |
-| Opaque AI answers | Real-time
+| Opaque AI answers | Real-time βthought processβ trace & clickable citations |
 
 ---
 
 ## π Live Demo β 30-Second Flow
 
-1. **Open the Space** and
-2.
-3.
-4.
-5. Watch the agents think
+1. **Open the Space** and switch to **Tab 1 β Connect**.
+2. Click **βConnect Gmailβ**, grant read-only access, then hit **βIngestβ**.
+3. Jump to **Tab 2 β Analytics** to see tag clouds and sender stats.
+4. Ask questions in **Tab 3 β Ask ReMind** e.g. βSummarise recent Anthropic updates.β
+5. Watch the agents think, then follow the citations back to the original email or bookmark.
 
-[βΆοΈ **
+[βΆοΈ **4-min video demo**](https://youtu.be/xxxxxxxx)
 
 ---
 
@@ -65,21 +57,21 @@ Built for the **Agents & MCP Hackathon 2025 β Track 3: Agentic Demo Showcase**
 ```
 ReMind/
 ββ config/ # default prompts, yaml configs
-ββ data/ #
+ββ data/ # sample inbox & bookmark JSON (redacted)
 ββ src/ # package root (importable as `remind`)
-β ββ agents/ # Manager, Gmail, Bookmark,
-β ββ interfaces/ # Gradio
+β ββ agents/ # Manager, Gmail, Bookmark, RAG, etc.
+β ββ interfaces/ # Gradio Blocks & helper UI widgets
 β ββ tools/ # Gmail API wrapper, bookmark parser, utils
 β ββ __init__.py
 β ββ main.py # entrypoint when imported as a module
-ββ app.py # HF Spaces entrypoint
+ββ app.py # HF Spaces entrypoint (imports src.main)
 ββ .env.example # copy β .env and fill in secrets
 ββ requirements.txt # runtime deps
 ββ pyproject.toml # tooling & formatting
 ββ README.md # you are here
 ```
 
-The **rootβlevel `app.py`** is the single entrypoint required by Hugging Face Spaces. It
+The **rootβlevel `app.py`** is the single entrypoint required by Hugging Face Spaces. It bootstraps the Gradio UI by calling `remind.interfaces.build()` inside `src/`.
 
 ---
 
@@ -101,54 +93,45 @@ $ python app.py  # opens http://localhost:7860
 
 | Key | Description |
 | ---------------------------------------------- | -------------------------------------------------------- |
-| `HF_TOKEN` | Hugging Face token for
-| `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | OAuth credentials from Google Cloud Console
-| `GOOGLE_REFRESH_TOKEN` / `GOOGLE_ACCESS_TOKEN` | Generated by
+| `HF_TOKEN` | Hugging Face token for optional Inference API calls |
+| `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` | OAuth credentials from Google Cloud Console |
+| `GOOGLE_REFRESH_TOKEN` / `GOOGLE_ACCESS_TOKEN` | Generated by `python scripts/setup_gmail_credentials.py` |
 
-> **Tip β Gmail setup**:
+> **Tip β Gmail setup**: Run `python scripts/setup_gmail_credentials.py` (included in `src/tools`) once. It opens a browser window, you sign in, and it prints the refresh & access tokens for `.env`.
 
 ---
 
 ## π οΈ Key Components
 
-
-*System architecture showing the multi-agent orchestration and data flow in ReMind*
-
 * **Multi-Agent Orchestrator** β built with *SmolagentS*; assigns Gmail parsing, bookmarking, and RAG tasks to specialists.
-* **LLM Stack** β
-* **Vector Store** β in-memory
-* **Gradio 5 UI** β three-tab layout
+* **LLM Stack** β OpenAI GPT-4o for reasoning, ADA-002 for embeddings (pluggable).
+* **Vector Store** β in-memory FAISS during hackathon (MongoDB Atlas planned).
+* **Gradio 5 UI** β three-tab layout + real-time agent βthoughtsβ accordion.
 
 ---
 
 ## π― Core Features (MVP)
 
-1. **
-β’
-β’
-
-
-β’
-
-β’
-
-
-β’
-β’ Transparent agent reasoning with step-by-step thinking display
-4. **Categories Dashboard**
-β’ Visual organization of AI bookmarks by category
-β’ Statistics and insights about your AI knowledge base
-β’ Easy browsing and discovery of forgotten resources
+1. **Gmail Ingestion (read-only)**
+β’ Detect newsletters via `List-Unsubscribe` header
+β’ Parse HTML β Markdown β clean text
+2. **Bookmark Brain**
+β’ Import `Bookmarks.json`
+β’ Zero-shot topic tagging
+3. **Natural-Language Q\&A**
+β’ RAG over combined email+bookmark corpus
+β’ Inline citations
+4. **Analytics Dashboard**
+β’ Tag cloud, sender histogram, engagement insights
 
 ---
 
 ## π£οΈ Roadmap (Post-Hackathon)
 
-*
-*
-*
-*
-* Mobile-optimized interface for on-the-go access
+* Gmail push notifications for near real-time ingest
+* Atlas Vector Search backend
+* Shared team workspaces
+* Mobile PWA wrapper
 
 Contribute ideas β [Discussions](https://huggingface.co/spaces/Agents-MCP-Hackathon/ReMind/discussions).
 
@@ -161,7 +144,7 @@ PRs welcome! During hack-week we operate on a **fast-merge** basis.
 * README with live demo & video βοΈ
 * HF Space public & reproducible βοΈ
 
-If ReMind sparks joy, **βοΈ the repo &
+If ReMind sparks joy, **βοΈ the repo & βLikeβ the Space** β community engagement counts!
 
 ---
 
@@ -173,7 +156,7 @@ If ReMind sparks joy, **βοΈ the repo & "Like" the Space** β community engag
 | Omar Issa | [@omarissa24](https://github.com/omarissa24) | Contributor |
 | Ziad Mazzawi | [@zmazz](https://github.com/zmazz) | Contributor |
 
-Thanks to **Modal Labs**, **Hugging Face**, **
+Thanks to **Modal Labs**, **Hugging Face**, **OpenAI**, **MistralAI**, and **Anthropic** for credits that power this demo.
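The `scripts/setup_gmail_credentials.py` helper referenced in the README tip is not included in this diff, so its exact contents are unknown. A minimal sketch of what such a helper could look like with `google-auth-oauthlib` (already pinned in `requirements.txt`) is shown below; the client-config structure and printed variable names mirror the `.env` keys from the table above, and everything else is an assumption rather than the repo's actual script.

```python
# Illustrative sketch of a Gmail credential bootstrap script (not the repo's verified code).
import os

from dotenv import load_dotenv
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]  # read-only, as the README promises

load_dotenv()
client_config = {
    "installed": {
        "client_id": os.environ["GOOGLE_CLIENT_ID"],
        "client_secret": os.environ["GOOGLE_CLIENT_SECRET"],
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "redirect_uris": ["http://localhost"],
    }
}

# Opens a browser window for consent, then returns OAuth credentials.
flow = InstalledAppFlow.from_client_config(client_config, SCOPES)
creds = flow.run_local_server(port=0)

print(f"GOOGLE_ACCESS_TOKEN={creds.token}")
print(f"GOOGLE_REFRESH_TOKEN={creds.refresh_token}")
```

The printed values can then be pasted into `.env` alongside the client id and secret.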
app.py
CHANGED
@@ -8,8 +8,10 @@ import os
 import sys
 from dotenv import load_dotenv
 
+# Load environment variables
 load_dotenv()
 
+# Add debug information for deployment
 print(f"Python version: {sys.version}")
 
 
@@ -17,10 +19,12 @@ def main():
     """Main application entry point for Hugging Face Spaces"""
     print("π Starting ReMind Digital Assistant...")
 
+    # Check for required environment variables
     hf_token = os.getenv("HF_TOKEN")
     if not hf_token:
         print("⚠️ Warning: HF_TOKEN not found. Some features may be limited.")
 
+    # Check for Gmail OAuth credentials
     google_client_id = os.getenv("GOOGLE_CLIENT_ID")
     google_client_secret = os.getenv("GOOGLE_CLIENT_SECRET")
     google_refresh_token = os.getenv("GOOGLE_REFRESH_TOKEN")
@@ -33,17 +37,20 @@ def main():
         print("✅ Gmail OAuth credentials configured.")
 
     try:
+        # Import the Gradio interface
         from src.interfaces.gradio_interface import demo
 
         print("✅ ReMind Digital Assistant ready!")
         print("π€ Real-time AI thinking display enabled")
         print("π§ Email β’ π Web Search β’ π Bookmarks")
 
+        # Return the demo for Hugging Face Spaces
         return demo
 
     except ImportError as e:
         print(f"❌ Import error: {e}")
         print("Please ensure all dependencies are installed.")
+        # Print more detailed error info for debugging
         import traceback
 
         traceback.print_exc()
@@ -51,16 +58,19 @@ def main():
 
     except Exception as e:
         print(f"❌ Error starting ReMind: {e}")
+        # Print more detailed error info for debugging
         import traceback
 
         traceback.print_exc()
         raise
 
 
+# For Hugging Face Spaces
 if __name__ == "__main__":
     try:
         demo = main()
-
+        # Simple launch configuration for maximum compatibility
+        demo.launch()
     except Exception as e:
         print(f"❌ Critical error during launch: {e}")
         import traceback
@@ -68,4 +78,5 @@ if __name__ == "__main__":
         traceback.print_exc()
         sys.exit(1)
 else:
+    # When imported, return the demo
     demo = main()

data/ai_bookmarks_cache.json
CHANGED
@@ -1,189 +1,417 @@
The cached bookmark list is rewritten in place: the previous entries, whose "title", "url", "date_added", and "id" values are truncated in this view, are replaced by 44 fully populated entries with the same fields, and every entry now also carries "category" and "category_name" (for example "model_releases" / "Model Releases & Updates", "tools_frameworks" / "Tools, Frameworks & Platforms", "community_events" / "Community, Events & Education", "benchmarks_leaderboards" / "Benchmarks & Leaderboards", "market_trends" / "Market Trends & Analysis", or "uncategorized" / "Uncategorized"). The file now ends with the following metadata:
  ],
  "last_updated": "2025-06-09T00:16:13.739024",
  "folder_name": "AI ressources",
  "total_count": 44,
  "last_categorized": "2025-06-09T00:19:15.060481",
  "categorization_stats": {
    "research_breakthroughs": 0,
    "model_releases": 15,
    "tools_frameworks": 5,
    "applications_industry": 0,
    "regulation_ethics": 0,
    "investment_funding": 0,
    "benchmarks_leaderboards": 3,
    "community_events": 4,
    "security_privacy": 0,
    "market_trends": 2,
    "uncategorized": 15
  }
}
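The new cache adds per-bookmark category fields plus the aggregate `categorization_stats` block shown above. A small sketch of how a tool could recompute those stats from the cache, keeping them in sync with the per-entry categories; the field names come from the JSON above, while the helper itself is illustrative and not part of the repo:

```python
import json
from collections import Counter
from pathlib import Path

CACHE_PATH = Path("data/ai_bookmarks_cache.json")


def recompute_categorization_stats(cache_path: Path = CACHE_PATH) -> dict:
    """Recount bookmarks per category so the cached stats stay consistent."""
    cache = json.loads(cache_path.read_text(encoding="utf-8"))
    counts = Counter(b.get("category", "uncategorized") for b in cache["bookmarks"])
    # Preserve zero counts for categories already present in the stats block.
    stats = {key: 0 for key in cache.get("categorization_stats", {})}
    stats.update(counts)
    return stats


if __name__ == "__main__":
    print(recompute_categorization_stats())
```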
data/ai_diagram.png
DELETED (Git LFS-tracked binary)
requirements.txt
CHANGED
@@ -1,9 +1,9 @@
-gradio
-google-api-python-client
-google-auth-httplib2
-google-auth-oauthlib
-python-dotenv
-smolagents
+gradio>=4.0.0
+google-api-python-client>=2.0.0
+google-auth-httplib2>=0.1.0
+google-auth-oauthlib>=1.0.0
+python-dotenv>=1.0.0
+smolagents>=0.3.0
 smolagents[openai]
-openai
-requests
+openai>=1.0.0
+requests>=2.25.0
src/agents/bookmarks_agent.py
CHANGED
@@ -1,5 +1,5 @@
 from typing import List, Dict, Any
-from smolagents import CodeAgent,
+from smolagents import CodeAgent, OpenAIServerModel, tool
 import os
 import json
 import sys
@@ -328,9 +328,9 @@ def get_cache_info() -> Dict[str, Any]:
 
 # Instantiate the Bookmarks CodeAgent with enhanced tools
 bookmarks_agent = CodeAgent(
-    model=
-
-
+    model=OpenAIServerModel(
+        model_id="o4-mini-2025-04-16",
+        api_key=os.environ["OPENAI_API_KEY"],
     ),
     tools=[
         update_ai_bookmarks_cache,
@@ -343,9 +343,6 @@ bookmarks_agent = CodeAgent(
     ],
     name="bookmarks_agent",
     description="Specialized agent for Chrome bookmarks operations, focusing on AI ressources folder. Extracts bookmarks from Chrome and caches them in data/ai_bookmarks_cache.json to avoid direct interaction with Chrome's raw JSON. Provides search, filtering, statistics, and cache management for AI-related bookmarks.",
-    max_steps=
+    max_steps=5,
     additional_authorized_imports=["json", "datetime", "urllib.parse", "pathlib"],
-    # Reduce verbosity
-    stream_outputs=False,
-    max_print_outputs_length=300,
 )
src/agents/categoriser_agent.py
CHANGED
@@ -1,5 +1,5 @@
 from typing import List, Dict, Any
-from smolagents import CodeAgent,
+from smolagents import CodeAgent, OpenAIServerModel, tool
 import os
 import json
 from pathlib import Path
@@ -524,9 +524,9 @@ def search_bookmarks_by_category_and_query(category: str, query: str) -> List[Di
 
 # Instantiate the Categoriser CodeAgent
 categoriser_agent = CodeAgent(
-    model=
-
-
+    model=OpenAIServerModel(
+        model_id="o4-mini-2025-04-16",
+        api_key=os.environ["OPENAI_API_KEY"],
     ),
     tools=[
         categorize_all_bookmarks,
@@ -538,9 +538,6 @@ categoriser_agent = CodeAgent(
     ],
     name="categoriser_agent",
     description="Specializes in categorizing AI news and bookmarks into 10 predefined categories: Research & Breakthroughs, Model Releases & Updates, Tools/Frameworks/Platforms, Applications & Industry Use Cases, Regulation/Ethics/Policy, Investment/Funding/M&A, Benchmarks & Leaderboards, Community/Events/Education, Security/Privacy/Safety, and Market Trends & Analysis. Uses keyword-based categorization and provides tools for managing and searching categorized content.",
-    max_steps=
+    max_steps=5,
     additional_authorized_imports=["json", "datetime", "re", "pathlib"],
-    # Reduce verbosity
-    stream_outputs=False,
-    max_print_outputs_length=300,
 )
src/agents/gmail_agent.py
CHANGED
@@ -1,6 +1,8 @@
+# src/agents/gmail_agent.py
+
 import os
 from typing import List, Dict, Any
-from smolagents import CodeAgent,
+from smolagents import CodeAgent, OpenAIServerModel, tool
 
 from src.tools.gmail_mcp_client import get_recent_emails as _get_recent_emails
 from src.tools.gmail_mcp_client import search_emails_simple as _search_emails_simple
@@ -20,6 +22,7 @@ def get_recent_emails(max_results: int = 10) -> List[Dict[str, str]]:
         List of email dictionaries with 'id', 'subject', 'sender', 'date', and 'snippet' fields.
         Returns empty list if no emails found. Each email can be read in detail using read_email_content.
     """
+    # Validate input
     if max_results < 1:
         max_results = 1
     elif max_results > 50:
@@ -42,6 +45,7 @@ def search_emails(query: str, max_results: int = 10) -> List[Dict[str, str]]:
         List of email dictionaries with 'id', 'subject', 'sender', 'date', and 'snippet' fields.
         Returns empty list if no matching emails found. Use read_email_content to get full email text.
     """
+    # Validate input
    if not query or not query.strip():
        print("Error: Empty search query provided")
        return [{"error": "Search query cannot be empty"}]
@@ -74,16 +78,15 @@ def read_email_content(message_id: str) -> Dict[str, Any]:
     return _read_email_content(message_id.strip())
 
 
+# Create a simplified Gmail agent focused on reading emails only
 gmail_agent = CodeAgent(
-    model=
-
-
+    model=OpenAIServerModel(
+        model_id="o4-mini-2025-04-16",
+        api_key=os.environ["OPENAI_API_KEY"],
     ),
     tools=[get_recent_emails, search_emails, read_email_content],
     name="gmail_agent",
     description="Gmail agent specialized in reading and searching emails from habib.adoum01@gmail.com and news@alphasignal.ai only",
-    max_steps=10,
+    max_steps=6,  # Reduced from 10 to prevent token overflow
     additional_authorized_imports=["json"],
-    stream_outputs=False,
-    max_print_outputs_length=300,
 )
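Only fragments of the three `@tool`-wrapped Gmail helpers are visible in this hunk. For context, a smolagents tool wrapper of this shape typically looks like the sketch below; the import alias and the validation branch are assumptions based on the fragments shown in the diff, not the file's verified contents.

```python
# Illustrative sketch of one wrapped Gmail tool (assumed shape, not verified repo code).
from typing import Dict, Any

from smolagents import tool

from src.tools.gmail_mcp_client import read_email_content as _read_email_content


@tool
def read_email_content(message_id: str) -> Dict[str, Any]:
    """Read the full content of a specific email.

    Args:
        message_id: The Gmail message id returned by get_recent_emails or search_emails.
    """
    # Basic validation before delegating to the underlying Gmail client.
    if not message_id or not message_id.strip():
        return {"error": "message_id cannot be empty"}
    return _read_email_content(message_id.strip())
```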
src/agents/manager_agent.py
CHANGED
@@ -1,6 +1,6 @@
 # src/agents/manager_agent.py
 
-from smolagents import CodeAgent,
+from smolagents import CodeAgent, OpenAIServerModel
 from dotenv import load_dotenv
 import os
 
@@ -15,9 +15,9 @@ load_dotenv()
 # Create a single focused agent instead of complex multi-agent system
 # This follows the smolagents principle: "The best agentic systems are the simplest"
 manager_agent = CodeAgent(
-    model=
-
-
+    model=OpenAIServerModel(
+        model_id="o4-mini-2025-04-16",
+        api_key=os.environ["OPENAI_API_KEY"],
     ),
     managed_agents=[web_agent, gmail_agent, bookmarks_agent, categoriser_agent],
     name="digital_assistant",
@@ -43,11 +43,8 @@ manager_agent = CodeAgent(
         "β’ Manually recategorize bookmarks when needed\n\n"
         "I combine these capabilities to help you with research, information gathering, and digital organization tasks."
     ),
-    max_steps=
+    max_steps=6,  # Reduced to prevent token overflow
     additional_authorized_imports=["json"],
     # Add planning to help with complex queries
     planning_interval=3,  # Plan every 3 steps to maintain focus
-    # Reduce verbosity - disable streaming outputs and minimize console display
-    stream_outputs=False,  # Disable live streaming of thoughts to terminal
-    max_print_outputs_length=500,  # Limit output length to reduce terminal noise
 )
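All of these agents now share the same `OpenAIServerModel` configuration, and the manager is a plain smolagents object, so it can be exercised directly from a Python shell. A minimal usage sketch; the query text is only an example, while `run()` is the standard smolagents entry point:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # OPENAI_API_KEY must be set before the agents module is imported
assert os.getenv("OPENAI_API_KEY"), "required by OpenAIServerModel at import time"

from src.agents.manager_agent import manager_agent

# The manager plans, then delegates to web_agent, gmail_agent, bookmarks_agent, categoriser_agent.
answer = manager_agent.run("Summarise my latest AI newsletter and list related bookmarks.")
print(answer)
```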
src/agents/web_agents.py
CHANGED
@@ -1,14 +1,14 @@
-import os
 from smolagents import (
     ToolCallingAgent,
-
+    OpenAIServerModel,
     WebSearchTool,
 )
 
+import os
 
-model =
-
-
+model = OpenAIServerModel(
+    model_id="o4-mini-2025-04-16",
+    api_key=os.environ["OPENAI_API_KEY"],
 )
 
 web_agent = ToolCallingAgent(
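The diff cuts off at the `ToolCallingAgent` constructor, so the exact arguments are not visible here. Based on the imports above and the standard smolagents signature, the instantiation plausibly resembles the sketch below; the name, description, and step limit are assumptions.

```python
# Hypothetical completion of the truncated constructor call; the real argument values may differ.
import os

from smolagents import OpenAIServerModel, ToolCallingAgent, WebSearchTool

model = OpenAIServerModel(
    model_id="o4-mini-2025-04-16",
    api_key=os.environ["OPENAI_API_KEY"],
)

web_agent = ToolCallingAgent(
    tools=[WebSearchTool()],   # live web search capability
    model=model,               # shared model defined above
    name="web_agent",
    description="Performs live web searches for up-to-date AI and tech information.",
    max_steps=6,
)
```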
src/interfaces/gradio_interface.py
CHANGED
@@ -173,152 +173,101 @@ def create_categories_interface():
|
|
173 |
def create_about_interface():
|
174 |
"""Create the about page interface."""
|
175 |
|
176 |
-
|
177 |
-
|
178 |
-
|
179 |
-
## π₯ Project Demo Video
|
180 |
-
|
181 |
-
[](https://youtu.be/CD0j2dGVycs)
|
182 |
|
183 |
-
|
184 |
|
185 |
-
|
186 |
|
187 |
-
|
188 |
-
|
189 |
-
**ReMind** is your intelligent digital memory assistant that helps you rediscover, organize, and make sense of your accumulated AI and technology knowledge. In our information-rich world, we often bookmark valuable resources and receive important newsletters only to forget about them later. This system solves this problem by intelligently categorizing and surfacing your digital discoveries when you need them most.
|
190 |
-
"""
|
191 |
|
192 |
-
about_content = """
|
193 |
## π― What ReMind Does
|
194 |
|
195 |
### π **Smart Bookmark Management**
|
196 |
-
- Automatically
|
197 |
-
- Provides intelligent search and filtering capabilities
|
198 |
- Tracks bookmark statistics and usage patterns
|
199 |
-
- Focuses specifically on AI and technology resources
|
200 |
-
|
201 |
-
### π§ **Newsletter Email Integration**
|
202 |
-
- Securely accesses emails from trusted AI news sources (news@alphasignal.ai)
|
203 |
-
- Searches through AI newsletters and updates with intelligent filtering
|
204 |
-
- Extracts insights from email-based learning resources
|
205 |
-
- Provides recent email browsing and content reading capabilities
|
206 |
-
|
207 |
-
### π **Real-time Web Search**
|
208 |
-
- Performs live web searches for the latest AI and tech developments
|
209 |
-
- Combines cached knowledge with current information
|
210 |
-
- Supports up to 6-step search processes for comprehensive research
|
211 |
-
- Delivers real-time results and analysis
|
212 |
|
213 |
### π·οΈ **Intelligent Categorization**
|
214 |
-
|
215 |
-
|
216 |
-
1. **π¬ Research & Breakthroughs** - Latest papers
|
217 |
-
2. **π Model Releases & Updates** - New AI models
|
218 |
-
3. **π οΈ Tools, Frameworks & Platforms** - Developer
|
219 |
-
4. **π Applications & Industry Use Cases** - Real-world AI implementations
|
220 |
-
5. **βοΈ Regulation, Ethics & Policy** - AI governance
|
221 |
-
6. **π° Investment, Funding & M&A** - Market movements
|
222 |
-
7. **π Benchmarks & Leaderboards** - Performance comparisons
|
223 |
-
8. **π Community, Events & Education** - Learning resources
|
224 |
-
9. **π Security, Privacy & Safety** - AI safety
|
225 |
-
10. **π Market Trends & Analysis** - Industry insights
|
226 |
-
|
227 |
-
### π¬ **
|
228 |
-
-
|
229 |
-
-
|
230 |
-
-
|
231 |
-
-
|
232 |
-
|
|
|
|
|
|
|
|
|
233 |
|
234 |
---
|
235 |
|
236 |
## π§ How It Works
|
237 |
|
238 |
-
**ReMind** is powered by
|
239 |
|
240 |
-
- **π€ Multi-
|
241 |
-
- **π§ Real-time reasoning** - Watch AI
|
242 |
-
- **π Dynamic categorization** -
|
243 |
-
- **π Semantic search** - Find resources
|
244 |
-
- **πΎ Local caching** - Efficient JSON-based storage for offline access
|
245 |
|
246 |
---
|
247 |
|
248 |
## π Getting Started
|
249 |
|
250 |
-
1. **
|
251 |
-
2. **Categorize Content**:
|
252 |
-
3. **
|
253 |
-
4. **Search
|
254 |
-
5. **Stay
|
255 |
|
256 |
---
|
257 |
|
258 |
## π Privacy & Security
|
259 |
|
260 |
-
- **Local
|
261 |
-
- **Selective Email Access**: Only accesses specified trusted email sources
|
262 |
-
- **
|
263 |
-
- **Transparent Operations**: All
|
264 |
-
- **No Data Sharing**: Personal information processed locally with secure authentication
|
265 |
|
266 |
---
|
267 |
|
268 |
## π‘ Why ReMind?
|
269 |
|
270 |
-
In the fast-moving world of AI and technology, staying informed while managing information overload is challenging.
|
271 |
-
|
272 |
-
- **Surfaces forgotten resources** from your Chrome bookmarks
|
273 |
-
- **Organizes email newsletters** into actionable intelligence
|
274 |
-
- **Combines multiple sources** for comprehensive AI knowledge management
|
275 |
-
- **Provides real-time updates** through web search integration
|
276 |
-
- **Learns and adapts** through intelligent categorization and recategorization
|
277 |
-
|
278 |
-
---
|
279 |
-
|
280 |
-
## π Acknowledgments
|
281 |
|
282 |
-
|
|
|
|
|
|
|
|
|
283 |
|
284 |
---
|
285 |
|
286 |
-
*"The
|
287 |
|
288 |
-
**Welcome to ReMind - where your digital
|
289 |
"""
|
290 |
|
291 |
with gr.Blocks() as about_tab:
|
292 |
-
gr.Markdown(intro_content)
|
293 |
-
|
294 |
-
# Add the AI architecture diagram
|
295 |
-
gr.Markdown("## ποΈ System Architecture")
|
296 |
-
gr.Image(
|
297 |
-
value="data/ai_diagram.png",
|
298 |
-
label="ReMind AI System Architecture",
|
299 |
-
show_label=True,
|
300 |
-
show_download_button=True,
|
301 |
-
height=400,
|
302 |
-
width=None,
|
303 |
-
interactive=False,
|
304 |
-
)
|
305 |
-
gr.Markdown("*System architecture showing the multi-agent orchestration and data flow in ReMind*")
|
306 |
gr.Markdown(about_content)
|
307 |
-
return about_tab
|
308 |
|
309 |
-
|
310 |
-
def sanitize_content(content):
|
311 |
-
"""Sanitize content to ensure it's a clean string without complex objects"""
|
312 |
-
if isinstance(content, str):
|
313 |
-
return content
|
314 |
-
elif isinstance(content, dict):
|
315 |
-
# If content is a dict, convert to string representation
|
316 |
-
return str(content)
|
317 |
-
elif hasattr(content, "__dict__"):
|
318 |
-
# If it's an object with attributes, convert to string
|
319 |
-
return str(content)
|
320 |
-
else:
|
321 |
-
return str(content)
|
322 |
|
323 |
|
324 |
def validate_message_history(history):
|
@@ -326,12 +275,10 @@ def validate_message_history(history):
|
|
326 |
validated = []
|
327 |
for msg in history:
|
328 |
if isinstance(msg, dict) and "role" in msg and "content" in msg:
|
329 |
-
# Ensure content is a string
|
330 |
-
|
331 |
-
|
332 |
-
|
333 |
-
clean_msg = {"role": str(msg["role"]), "content": content}
|
334 |
-
validated.append(clean_msg)
|
335 |
else:
|
336 |
print(f"Warning: Invalid message format detected: {msg}")
|
337 |
return validated
|
@@ -352,23 +299,20 @@ def chat_with_agent(message: str, history: List) -> Generator[List, None, None]:
|
|
352 |
if isinstance(item, dict):
|
353 |
# Already a dict, check if it has required keys
|
354 |
if "role" in item and "content" in item:
|
355 |
-
|
356 |
-
content = sanitize_content(item["content"])
|
357 |
-
formatted_history.append({"role": str(item["role"]), "content": content})
|
358 |
else:
|
359 |
# Skip malformed dict items
|
360 |
print(f"Warning: Skipping malformed history item: {item}")
|
361 |
continue
|
362 |
elif hasattr(item, "role") and hasattr(item, "content"):
|
363 |
-
# ChatMessage object - convert to dict
|
364 |
-
content
|
365 |
-
formatted_history.append({"role": str(item.role), "content": content})
|
366 |
elif isinstance(item, (list, tuple)) and len(item) == 2:
|
367 |
# Legacy format: [user_message, assistant_message] or (user, assistant)
|
368 |
# Convert to proper message format
|
369 |
if isinstance(item[0], str) and isinstance(item[1], str):
|
370 |
-
formatted_history.append({"role": "user", "content":
|
371 |
-
formatted_history.append({"role": "assistant", "content":
|
372 |
else:
|
373 |
print(f"Warning: Skipping malformed history item: {item}")
|
374 |
continue
|
@@ -383,10 +327,10 @@ def chat_with_agent(message: str, history: List) -> Generator[List, None, None]:
|
|
383 |
# Start with user message in history
|
384 |
new_history = formatted_history.copy()
|
385 |
|
386 |
-
# Show initial thinking message
|
387 |
thinking_message = {
|
388 |
"role": "assistant",
|
389 |
-
"content": "
|
390 |
}
|
391 |
new_history.append(thinking_message)
|
392 |
yield validate_message_history(new_history)
|
@@ -404,16 +348,16 @@ def chat_with_agent(message: str, history: List) -> Generator[List, None, None]:
|
|
404 |
for step in agent_stream:
|
405 |
step_count += 1
|
406 |
|
407 |
-
# Update thinking message with current step info
|
408 |
if hasattr(step, "step_number") and hasattr(step, "action"):
|
409 |
-
step_content = "
|
410 |
-
step_content += f"
|
411 |
|
412 |
if hasattr(step, "thought") and step.thought:
|
413 |
-
step_content += f"π **Thought:** {
|
414 |
|
415 |
if hasattr(step, "action") and step.action:
|
416 |
-
step_content += f"π οΈ **Action:** {
|
417 |
|
418 |
if hasattr(step, "observations") and step.observations:
|
419 |
obs_text = str(step.observations)[:300]
|
@@ -421,10 +365,7 @@ def chat_with_agent(message: str, history: List) -> Generator[List, None, None]:
|
|
421 |
obs_text += "..."
|
422 |
step_content += f"ποΈ **Observation:** {obs_text}\n\n"
|
423 |
|
424 |
-
|
425 |
-
|
426 |
-
# Ensure the content is a clean string
|
427 |
-
thinking_message = {"role": "assistant", "content": str(step_content)}
|
428 |
new_history[-1] = thinking_message
|
429 |
yield validate_message_history(new_history)
|
430 |
|
@@ -432,10 +373,7 @@ def chat_with_agent(message: str, history: List) -> Generator[List, None, None]:
         # If streaming fails, fall back to regular execution
         print(f"Streaming failed: {stream_error}, falling back to regular execution")

-        thinking_message = {
-            "role": "assistant",
-            "content": "β‘ **Agent Working** π\n\nπ« Processing your request using available tools...\n\nβ³ *Please wait...*",
-        }
         new_history[-1] = thinking_message
         yield validate_message_history(new_history)

@@ -489,35 +427,28 @@ def chat_with_agent(message: str, history: List) -> Generator[List, None, None]:
         tool_usage_content = "Agent executed actions successfully"

     # Update thinking to show completion
-    thinking_message = {
-        "role": "assistant",
-
-    }
     new_history[-1] = thinking_message
     yield validate_message_history(new_history)

     # Add tool usage message if there were tools used
     if tool_usage_content:
-        tool_message = {
-            "role": "assistant",
-            "content": f"π οΈ **Tools & Actions Used**\n\n{str(tool_usage_content)}",
-        }
         new_history.append(tool_message)
         yield validate_message_history(new_history)

     # Add final response
     final_response = str(result) if result else "I couldn't process your request."
-    final_message = {"role": "assistant", "content":
     new_history.append(final_message)
     yield validate_message_history(new_history)
     return

     # If we get here, streaming worked, so get the final result
     # The streaming should have shown all the steps, now get final answer
-    thinking_message = {
-        "role": "assistant",
-        "content": "β **Agent Complete** π\n\nβ All steps executed\nβ Preparing final response",
-    }
     new_history[-1] = thinking_message
     yield validate_message_history(new_history)
@@ -529,7 +460,7 @@ def chat_with_agent(message: str, history: List) -> Generator[List, None, None]:
         if hasattr(last_step, "observations") and last_step.observations:
             final_response = str(last_step.observations)

-        final_message = {"role": "assistant", "content":
         new_history.append(final_message)
         yield validate_message_history(new_history)

@@ -551,7 +482,7 @@ def chat_with_agent(message: str, history: List) -> Generator[List, None, None]:
 chat_interface = gr.ChatInterface(
     fn=chat_with_agent,
     type="messages",
-    title="π
     description="""
 ## Your Comprehensive AI Assistant! π€

@@ -579,11 +510,13 @@ chat_interface = gr.ChatInterface(
 - Research topics and gather up-to-date data

 ---
-
 """,
     examples=[
-        "π§ Show me my latest 5 emails",
         "π Search my AI bookmarks",
         "π€ Find emails about AI",
         "π Search for latest AI news",
         "π What AI resources do I have?",
@@ -596,7 +529,6 @@ chat_interface = gr.ChatInterface(
         "π οΈ Find tools and frameworks bookmarks",
     ],
     show_progress="hidden",
-    cache_examples=False,
 )

 # Create categories and about interfaces
 def create_about_interface():
     """Create the about page interface."""

+    about_content = """
+# π§ About ReMind

+## Bring your past to mind.

+**ReMind** is your intelligent digital memory assistant that helps you rediscover, organize, and make sense of your accumulated digital knowledge. In our information-rich world, we often bookmark valuable resources only to forget about them later. ReMind solves this problem by intelligently categorizing and surfacing your digital discoveries when you need them most.

+---

 ## π― What ReMind Does

 ### π **Smart Bookmark Management**
+- Automatically imports and manages your Chrome bookmarks
+- Provides intelligent search and filtering capabilities
 - Tracks bookmark statistics and usage patterns
+- Focuses specifically on AI and technology resources

 ### π·οΈ **Intelligent Categorization**
+ReMind automatically organizes your bookmarks into **10 key AI categories**:
+
+1. **π¬ Research & Breakthroughs** - Latest papers and theoretical advances
+2. **π Model Releases & Updates** - New AI models and version updates
+3. **π οΈ Tools, Frameworks & Platforms** - Developer tools and SDKs
+4. **π Applications & Industry Use Cases** - Real-world AI implementations
+5. **βοΈ Regulation, Ethics & Policy** - AI governance and ethical considerations
+6. **π° Investment, Funding & M&A** - Market movements and startup funding
+7. **π Benchmarks & Leaderboards** - Performance comparisons and competitions
+8. **π Community, Events & Education** - Learning resources and conferences
+9. **π Security, Privacy & Safety** - AI safety and security research
+10. **π Market Trends & Analysis** - Industry insights and forecasts
+
+### π¬ **Conversational Interface**
+- Chat naturally with your AI assistant about your bookmarks
+- Ask questions like "Show me my latest AI tools" or "Find research about transformers"
+- Get contextual recommendations based on your interests
+- Real-time thinking process visualization
+
+### π§ **Email Integration**
+- Browse and search through your important emails
+- Focus on AI newsletters and updates from trusted sources
+- Extract insights from your email-based learning resources

 ---

 ## π§ How It Works

+**ReMind** is powered by **Smolagents**, a modern AI agent framework that enables:

+- **π€ Multi-tool orchestration** - Seamlessly combines bookmark management, email access, and web search
+- **π§ Real-time reasoning** - Watch the AI think through problems step-by-step
+- **π Dynamic categorization** - Continuously learns and improves bookmark organization
+- **π Semantic search** - Find resources based on meaning, not just keywords

 ---

 ## π Getting Started

+1. **Load Your Bookmarks**: Use the chat interface to import your Chrome bookmarks
+2. **Categorize Content**: Ask ReMind to automatically categorize your AI resources
+3. **Explore Categories**: Browse organized categories in the Categories Dashboard
+4. **Search & Discover**: Use natural language to find specific resources
+5. **Stay Updated**: Let ReMind help you track new developments in AI

 ---

 ## π Privacy & Security

+- **Local Processing**: Your bookmarks are processed and stored locally
+- **Selective Email Access**: Only accesses specified trusted email sources
+- **No Data Sharing**: Your personal information stays on your device
+- **Transparent Operations**: All AI operations are visible and explainable

 ---

 ## π‘ Why ReMind?

+In the fast-moving world of AI and technology, staying informed while managing information overload is challenging. ReMind transforms your passive bookmark collection into an active, intelligent knowledge base that:

+- **Surfaces forgotten gems** from your browsing history
+- **Identifies patterns** in your learning journey
+- **Suggests connections** between different resources
+- **Keeps you organized** without manual effort
+- **Learns your interests** and adapts over time

 ---

+*"The palest ink is better than the best memory, but the smartest AI makes both ink and memory work together."*

+**Welcome to ReMind - where your digital past becomes your future advantage.**
 """

     with gr.Blocks() as about_tab:
         gr.Markdown(about_content)

+    return about_tab

 def validate_message_history(history):

     validated = []
     for msg in history:
         if isinstance(msg, dict) and "role" in msg and "content" in msg:
+            # Ensure content is a string
+            if not isinstance(msg["content"], str):
+                msg["content"] = str(msg["content"])
+            validated.append(msg)
         else:
             print(f"Warning: Invalid message format detected: {msg}")
     return validated
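The `validate_message_history` helper above is what keeps every yielded history in Gradio's `type="messages"` shape: a list of `{"role", "content"}` dicts with string content. A minimal, self-contained sketch of that behaviour; the function body mirrors the lines added in this diff, while the sample history values are purely illustrative:

```python
# Mirrors the validate_message_history added above; the inputs below are made up.
def validate_message_history(history):
    validated = []
    for msg in history:
        if isinstance(msg, dict) and "role" in msg and "content" in msg:
            # Ensure content is a string
            if not isinstance(msg["content"], str):
                msg["content"] = str(msg["content"])
            validated.append(msg)
        else:
            print(f"Warning: Invalid message format detected: {msg}")
    return validated

history = [
    {"role": "user", "content": "Show me my latest AI bookmarks"},
    {"role": "assistant", "content": ["bookmark 1", "bookmark 2"]},  # non-string content
    ("legacy", "tuple"),                                             # malformed entry
]
print(validate_message_history(history))
# [{'role': 'user', 'content': 'Show me my latest AI bookmarks'},
#  {'role': 'assistant', 'content': "['bookmark 1', 'bookmark 2']"}]
```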
         if isinstance(item, dict):
             # Already a dict, check if it has required keys
             if "role" in item and "content" in item:
+                formatted_history.append(item)
             else:
                 # Skip malformed dict items
                 print(f"Warning: Skipping malformed history item: {item}")
                 continue
         elif hasattr(item, "role") and hasattr(item, "content"):
+            # ChatMessage object - convert to dict
+            formatted_history.append({"role": item.role, "content": item.content})
         elif isinstance(item, (list, tuple)) and len(item) == 2:
             # Legacy format: [user_message, assistant_message] or (user, assistant)
             # Convert to proper message format
             if isinstance(item[0], str) and isinstance(item[1], str):
+                formatted_history.append({"role": "user", "content": item[0]})
+                formatted_history.append({"role": "assistant", "content": item[1]})
             else:
                 print(f"Warning: Skipping malformed history item: {item}")
                 continue
     # Start with user message in history
     new_history = formatted_history.copy()

+    # Show initial thinking message
     thinking_message = {
         "role": "assistant",
+        "content": "π§ **Agent Planning**\n\nAnalyzing your request and creating execution plan...",
     }
     new_history.append(thinking_message)
     yield validate_message_history(new_history)
         for step in agent_stream:
             step_count += 1

+            # Update thinking message with current step info
             if hasattr(step, "step_number") and hasattr(step, "action"):
+                step_content = "π§ **Agent Planning & Execution**\n\n"
+                step_content += f"**Step {step.step_number}:**\n"

                 if hasattr(step, "thought") and step.thought:
+                    step_content += f"π **Thought:** {step.thought}\n\n"

                 if hasattr(step, "action") and step.action:
+                    step_content += f"π οΈ **Action:** {step.action}\n\n"

                 if hasattr(step, "observations") and step.observations:
                     obs_text = str(step.observations)[:300]

                         obs_text += "..."
                     step_content += f"ποΈ **Observation:** {obs_text}\n\n"

+                thinking_message["content"] = step_content
                 new_history[-1] = thinking_message
                 yield validate_message_history(new_history)

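The streaming loop above assumes the manager agent can be run with `stream=True` and yields step objects that expose `step_number`, `thought`, `action`, and `observations`. A rough sketch of that pattern under the same assumptions (the `agent.run` call and the attribute names are taken from the checks in the code, not from a verified smolagents API reference):

```python
# Sketch only: assumes an agent whose run(task, stream=True) yields step objects
# carrying the attributes the hasattr() checks above look for.
def stream_step_log(agent, task):
    """Yield a growing Markdown log of the agent's intermediate steps."""
    log = "**Agent Planning & Execution**\n\n"
    for step in agent.run(task, stream=True):
        if hasattr(step, "step_number"):
            log += f"**Step {step.step_number}:**\n"
        if getattr(step, "thought", None):
            log += f"**Thought:** {step.thought}\n\n"
        if getattr(step, "action", None):
            log += f"**Action:** {step.action}\n\n"
        if getattr(step, "observations", None):
            obs = str(step.observations)
            log += f"**Observation:** {obs[:300]}{'...' if len(obs) > 300 else ''}\n\n"
        # Each intermediate log can replace the last assistant message and be
        # yielded back to Gradio, which is what chat_with_agent does above.
        yield log
```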
         # If streaming fails, fall back to regular execution
         print(f"Streaming failed: {stream_error}, falling back to regular execution")

+        thinking_message["content"] = "π§ **Agent Working**\n\nProcessing your request using available tools..."
         new_history[-1] = thinking_message
         yield validate_message_history(new_history)

         tool_usage_content = "Agent executed actions successfully"

     # Update thinking to show completion
+    thinking_message["content"] = (
+        "π§ **Agent Complete**\n\nβ Request processed successfully\nβ Response prepared"
+    )
     new_history[-1] = thinking_message
     yield validate_message_history(new_history)

     # Add tool usage message if there were tools used
     if tool_usage_content:
+        tool_message = {"role": "assistant", "content": f"π οΈ **Tools & Actions Used**\n\n{tool_usage_content}"}
         new_history.append(tool_message)
         yield validate_message_history(new_history)

     # Add final response
     final_response = str(result) if result else "I couldn't process your request."
+    final_message = {"role": "assistant", "content": final_response}
     new_history.append(final_message)
     yield validate_message_history(new_history)
     return

     # If we get here, streaming worked, so get the final result
     # The streaming should have shown all the steps, now get final answer
+    thinking_message["content"] = "π§ **Agent Complete**\n\nβ All steps executed\nβ Preparing final response"
     new_history[-1] = thinking_message
     yield validate_message_history(new_history)

         if hasattr(last_step, "observations") and last_step.observations:
             final_response = str(last_step.observations)

+        final_message = {"role": "assistant", "content": final_response}
         new_history.append(final_message)
         yield validate_message_history(new_history)

 chat_interface = gr.ChatInterface(
     fn=chat_with_agent,
     type="messages",
+    title="π Digital Assistant - Powered by Smolagents",
     description="""
 ## Your Comprehensive AI Assistant! π€

 - Research topics and gather up-to-date data

 ---
+**π Security Note:** Email read access is limited to `habib.adoum01@gmail.com` and `news@alphasignal.ai`
+
+**π‘ Watch the agent think in real-time** - You'll see my reasoning process, tool selection, and execution steps in collapsible sections!
 """,
     examples=[
         "π Search my AI bookmarks",
+        "π§ Show me my latest 5 emails",
         "π€ Find emails about AI",
         "π Search for latest AI news",
         "π What AI resources do I have?",

         "π οΈ Find tools and frameworks bookmarks",
     ],
     show_progress="hidden",
 )

 # Create categories and about interfaces
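Elsewhere in the module, `chat_interface` and the tab built by `create_about_interface()` are presumably combined into the app that `app.py` launches. A minimal wiring sketch under that assumption; `gr.TabbedInterface`, the tab names, and the launch flags below are illustrative rather than taken from this diff:

```python
# Illustrative wiring only: assumes chat_interface and create_about_interface()
# defined above are the tabs exposed by the Space.
import gradio as gr

def build_app():
    about_tab = create_about_interface()
    return gr.TabbedInterface(
        [chat_interface, about_tab],
        tab_names=["Chat", "About"],  # hypothetical tab titles
    )

if __name__ == "__main__":
    build_app().launch(server_name="0.0.0.0", server_port=7860)
```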