Welly-code committed on
Commit 40f8d6e
verified
1 Parent(s): d59a4dd

Initial upload: Stackme library + README + LICENSE

.gitignore ADDED
@@ -0,0 +1,11 @@
+ __pycache__/
+ *.py[cod]
+ *.egg-info/
+ dist/
+ build/
+ .eggs/
+ .pytest_cache/
+ .stackme/
+ *.sqlite
+ vectors.faiss
+ .env
LICENSE ADDED
@@ -0,0 +1,40 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity.
+
+ 2. Grant of Copyright License.
+
+ 3. Grant of Patent License.
+
+ 4. Redistribution.
+
+ 5. Submission of Contributions.
+
+ 6. Trademarks.
+
+ 7. Disclaimer of Warranty.
+
+ 8. Limitation of Liability.
+
+ 9. Accepting Warranty or Additional Liability.
+
+ END OF TERMS AND CONDITIONS
+
+ Copyright 2026 Stack AI
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
README.md ADDED
@@ -0,0 +1,102 @@
+ # Stackme
+
+ **Your context brain for every AI.**
+
+ Stackme is a free, open-source memory layer for AI. It stores what matters about you, retrieves relevant context before every query, and injects it into any AI — ChatGPT, Claude, Copilot, Gemini, Ollama, or anything else.
+
+ No server. No subscription. No data leaves your machine.
+
+ ---
+
+ ## Install
+
+ ```bash
+ pip install stackme
+ ```
+
+ ## Quick Start
+
+ ```python
+ from stackme import Context
+
+ ctx = Context()
+
+ # Add facts about yourself
+ ctx.add_fact("I run a fintech B2B SaaS, launched March 2024")
+ ctx.add_fact("Q3 goal: 10K paying customers")
+ ctx.add_fact("Users are 25-40, income $50-100K")
+
+ # Ask any AI — Stackme retrieves your context first
+ context = ctx.get_relevant("What pricing should we use?")
+ # → "I run a fintech B2B SaaS... | Q3 goal: 10K customers..."
+
+ # Your AI gets the full picture every time.
+ ```
+
+ ## How It Works
+
+ ```
+ You: "What pricing should we use?"
+
+ Stackme retrieves:
+ - I run a fintech B2B SaaS
+ - Q3 goal: 10K paying customers
+ - Users: 25-40, $50-100K income
+
+ Enriched prompt sent to ChatGPT:
+ "Context: I run a fintech B2B SaaS...
+  Q3 goal: 10K customers...
+  User: What pricing should we use?"
+
+ ChatGPT responds with full context awareness.
+ ```
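In code, that flow is just string assembly around `get_relevant`. A minimal sketch (the `ask_your_ai` call is a placeholder for whatever client you already use; the Stackme calls are the same ones shown in Quick Start):

```python
from stackme import Context

ctx = Context()
ctx.add_fact("I run a fintech B2B SaaS, launched March 2024")
ctx.add_fact("Q3 goal: 10K paying customers")

def enrich(question: str) -> str:
    """Prepend the user's stored context to the outgoing prompt."""
    context = ctx.get_relevant(question)  # retrieve the most relevant memories
    return f"Context:\n{context}\n\nUser: {question}" if context else question

prompt = enrich("What pricing should we use?")
# answer = ask_your_ai(prompt)  # placeholder: ChatGPT, Claude, Ollama, any client
```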
+
+ ## Architecture
+
+ ```
+ ~/.stackme/
+ ├── memory.sqlite   ← all memories, encrypted
+ ├── vectors.faiss   ← semantic index
+ └── facts.graph     ← structured knowledge graph
+ ```
+
+ - **Session Memory** — current conversation, in-process
+ - **Short-Term Memory** — last 24h, SQLite
+ - **Long-Term Memory** — permanent, SQLite + vector search
+ - **Knowledge Graph** — structured facts, extracted from your prompts
+
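Each tier is reachable from the same `Context` object. A short sketch of how they surface in practice (method names as defined in `stackme/context.py`; printed values are illustrative):

```python
from stackme import Context

ctx = Context()

# Session tier: the current conversation window, kept in-process only
ctx.add_user_message("I am a solo founder. We are building a B2B SaaS for fintech")
ctx.add_ai_message("Noted your product and team context.")
print(ctx.get_session_history(last_n=2))

# Long-term tier + knowledge graph: persisted under ~/.stackme/
ctx.add_fact("Q3 goal: 10K paying customers")
print(ctx.get_facts())                 # raw fact strings
print(ctx.get_graph(subject="Team"))   # structured (subject, predicate, value) facts

backup = ctx.export()                  # JSON-serializable dump for backup / migration
ctx.clear_session()                    # wipes the session tier only
```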
+ ## Chrome Extension
+
+ The Stackme Chrome Extension intercepts your prompts on ChatGPT, Claude, and Copilot, and injects your context automatically.
+
+ Install: [Chrome Web Store] (coming soon)
+
+ ## Why Stackme?
+
+ | | Without Stackme | With Stackme |
+ |---|---|---|
+ | First query | AI knows nothing about you | AI knows your full context |
+ | Repeat queries | Start from zero every time | Context compounds automatically |
+ | Team context | Siloed in each conversation | Shared memory across team |
+ | Your data | Lost after the session | Stored permanently, locally |
+
+ ## Supported AI Platforms
+
+ - ChatGPT (chat.openai.com)
+ - Claude (claude.ai)
+ - Copilot (copilot.microsoft.com)
+ - Gemini (gemini.google.com)
+ - Ollama (local)
+ - Any AI via API
+
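For the fully local route, here is a minimal sketch of pairing Stackme with Ollama (assumes Ollama is serving on its default port 11434 and that a model such as `llama3` is already pulled; the `/api/generate` request shape follows Ollama's documented API and is not part of Stackme itself):

```python
import json
import urllib.request

from stackme import Context

ctx = Context()
question = "What pricing should we use?"
context = ctx.get_relevant(question)
prompt = f"Context:\n{context}\n\nUser: {question}" if context else question

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```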
+ ## Privacy
+
+ Everything stays on your machine. Your memories are yours. We never see, store, or transmit your data. No account required.
+
+ ## License
+
+ Apache 2.0 — free for commercial and personal use.
+
+ ---
+
+ Built by [Stack AI](https://stack-ai.me) · [GitHub](https://github.com/my-ai-stack/stackme) · [HuggingFace](https://huggingface.co/my-ai-stack/stackme)
dist/stackme-0.1.0-py3-none-any.whl ADDED
Binary file (8.67 kB).
 
dist/stackme-0.1.0.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25424a062db3c4e988b48e225b7ab0ae385a4cba237350c992fa3585021f0a9d
+ size 7777
pyproject.toml ADDED
@@ -0,0 +1,42 @@
+ [build-system]
+ requires = ["hatchling"]
+ build-backend = "hatchling.build"
+
+ [project]
+ name = "stackme"
+ version = "0.1.0"
+ description = "The context layer for every AI. Your memory brain, stored locally."
+ readme = "README.md"
+ license = {text = "Apache-2.0"}
+ authors = [
+     {name = "Stack AI", email = "hello@stack-ai.me"}
+ ]
+ keywords = ["ai", "memory", "context", "llm", "agentic", "rag", "tool-calling"]
+ classifiers = [
+     "Development Status :: 4 - Beta",
+     "Intended Audience :: Developers",
+     "License :: OSI Approved :: Apache Software License",
+     "Operating System :: OS Independent",
+     "Programming Language :: Python :: 3",
+     "Programming Language :: Python :: 3.10",
+     "Programming Language :: Python :: 3.11",
+     "Programming Language :: Python :: 3.12",
+     "Topic :: Scientific/Engineering :: Artificial Intelligence",
+ ]
+ requires-python = ">=3.10"
+ dependencies = []
+
+ [project.optional-dependencies]
+ dev = ["pytest", "pytest-asyncio"]
+
+ [project.urls]
+ Homepage = "https://stack-ai.me/stackme"
+ Documentation = "https://stack-ai.me/stackme/docs"
+ Repository = "https://github.com/my-ai-stack/stackme"
+ HuggingFace = "https://huggingface.co/my-ai-stack/stackme"
+
+ [tool.hatch.build.targets.wheel]
+ packages = ["stackme"]
+
+ [tool.pytest.ini_options]
+ testpaths = ["tests"]
stackme/__init__.py ADDED
@@ -0,0 +1,12 @@
+ """
+ Stackme — The context layer for every AI.
+ Your memory brain, stored locally, works with any AI.
+ """
+
+ __version__ = "0.1.0"
+ __author__ = "Stack AI"
+ __license__ = "Apache 2.0"
+
+ from .context import Context
+
+ __all__ = ["Context"]
stackme/context.py ADDED
@@ -0,0 +1,526 @@
+ """
+ Stackme — Core Context class.
+
+ Three-tier memory architecture:
+     Session   → in-process list (current conversation window)
+     ShortTerm → SQLite (last 24h)
+     LongTerm  → SQLite + vector search (permanent facts + learned knowledge)
+     Graph     → SQLite (structured knowledge graph)
+ """
+
+ import os
+ import json
+ import re
+ import sqlite3
+ import hashlib
+ import uuid
+ from pathlib import Path
+ from datetime import datetime, timedelta
+ from dataclasses import dataclass, field
+
+ # ─── App Directory ────────────────────────────────────────────────────────────
+
+ def _stackme_dir() -> Path:
+     d = Path(os.path.expanduser("~/.stackme"))
+     d.mkdir(parents=True, exist_ok=True)
+     return d
+
+
+ # ─── Data Models ─────────────────────────────────────────────────────────────
+
+ @dataclass
+ class MemoryItem:
+     id: str
+     type: str  # "fact" | "prompt" | "session" | "context"
+     content: str
+     metadata: dict
+     embedding: list[float] | None = None
+     created_at: str = field(default_factory=lambda: datetime.utcnow().isoformat())
+     last_accessed: str = field(default_factory=lambda: datetime.utcnow().isoformat())
+     access_count: int = 0
+     user_id: str = "default"
+
+
+ @dataclass
+ class GraphFact:
+     id: str
+     subject: str
+     predicate: str
+     value: str
+     created_at: str = field(default_factory=lambda: datetime.utcnow().isoformat())
+
+
+ # ─── Simple Embedding (cosine sim without external deps) ──────────────────────
+
+ def _simple_vec(text: str, dim: int = 128) -> list[float]:
+     """Deterministic placeholder embedding from a text hash — stands in for a real embedding model."""
+     h = hashlib.sha256(text.encode()).digest()
+     vec = []
+     for i in range(dim):
+         byte_val = h[i % len(h)]
+         vec.append((byte_val / 255.0) * 2.0 - 1.0)
+     norm = sum(v * v for v in vec) ** 0.5
+     return [v / (norm + 1e-8) for v in vec]
+
+
+ def _cosine(a: list[float], b: list[float]) -> float:
+     dot = sum(x * y for x, y in zip(a, b))
+     na = sum(x * x for x in a) ** 0.5
+     nb = sum(x * x for x in b) ** 0.5
+     return dot / (na * nb + 1e-8)
+
+
+ # ─── Storage Layer ─────────────────────────────────────────────────────────────
+
+ class Storage:
+     """SQLite + FAISS-lite storage for MemoryItems."""
+
+     def __init__(self, db_path: Path | None = None, dim: int = 128):
+         self.dim = dim
+         self.db_path = db_path or str(_stackme_dir() / "memory.sqlite")
+         self.faiss_path = str(_stackme_dir() / "vectors.faiss")
+         self._conn = sqlite3.connect(self.db_path, check_same_thread=False)
+         self._conn.execute("PRAGMA journal_mode=WAL")
+         self._vectors: list[list[float]] = []
+         self._load_vectors()
+         self._init_db()
+
+     def _init_db(self):
+         self._conn.execute("""
+             CREATE TABLE IF NOT EXISTS memory (
+                 id TEXT PRIMARY KEY,
+                 type TEXT NOT NULL,
+                 content TEXT NOT NULL,
+                 metadata TEXT NOT NULL DEFAULT '{}',
+                 embedding_id INTEGER,
+                 created_at TEXT NOT NULL,
+                 last_accessed TEXT NOT NULL,
+                 access_count INTEGER NOT NULL DEFAULT 0,
+                 user_id TEXT NOT NULL DEFAULT 'default'
+             )
+         """)
+         self._conn.execute("""
+             CREATE TABLE IF NOT EXISTS graph (
+                 id TEXT PRIMARY KEY,
+                 subject TEXT NOT NULL,
+                 predicate TEXT NOT NULL,
+                 value TEXT NOT NULL,
+                 created_at TEXT NOT NULL
+             )
+         """)
+         self._conn.execute("""
+             CREATE TABLE IF NOT EXISTS short_term (
+                 id TEXT PRIMARY KEY,
+                 content TEXT NOT NULL,
+                 expires_at TEXT NOT NULL
+             )
+         """)
+         self._conn.execute(
+             "CREATE INDEX IF NOT EXISTS idx_memory_type ON memory(type)"
+         )
+         self._conn.execute(
+             "CREATE INDEX IF NOT EXISTS idx_memory_user ON memory(user_id)"
+         )
+         self._conn.commit()
+
+     def _load_vectors(self):
+         """Load FAISS index from disk (in-memory for simplicity)."""
+         pass  # We keep vectors in-memory for now
+
+     def add(self, item: MemoryItem) -> str:
+         """Store a memory item with optional embedding."""
+         if item.embedding is None:
+             item.embedding = _simple_vec(item.content, self.dim)
+         self._conn.execute(
+             """INSERT OR REPLACE INTO memory
+                (id, type, content, metadata, created_at, last_accessed, access_count, user_id)
+                VALUES (?, ?, ?, ?, ?, ?, ?, ?)""",
+             (item.id, item.type, item.content, json.dumps(item.metadata),
+              item.created_at, item.last_accessed, item.access_count, item.user_id)
+         )
+         self._conn.commit()
+         # Store vector in-memory
+         self._vectors.append(item.embedding)
+         return item.id
+
+     def search(self, query: str, top_k: int = 5, user_id: str = "default") -> list[MemoryItem]:
+         """Semantic search against stored memories."""
+         qvec = _simple_vec(query, self.dim)
+         rows = self._conn.execute(
+             """SELECT id, type, content, metadata, created_at, last_accessed, access_count, user_id
+                FROM memory WHERE user_id = ? ORDER BY created_at DESC LIMIT 200""",
+             (user_id,)
+         ).fetchall()
+         scored = []
+         for row in rows:
+             item = MemoryItem(
+                 id=row[0], type=row[1], content=row[2],
+                 metadata=json.loads(row[3]), created_at=row[4],
+                 last_accessed=row[5], access_count=row[6], user_id=row[7]
+             )
+             # Embeddings are not persisted; _simple_vec is deterministic, so recompute from content.
+             item.embedding = _simple_vec(item.content, self.dim)
+             sim = _cosine(qvec, item.embedding)
+             # Boost by access_count (popular items rank higher)
+             boost = 1.0 + (item.access_count / 100.0)
+             scored.append((sim * boost, item.access_count, item))
+         # Highest score first; more frequently accessed items break ties.
+         scored.sort(key=lambda x: (x[0], x[1]), reverse=True)
+         return [item for _, _, item in scored[:top_k]]
+
+     def update_access(self, item_id: str):
+         self._conn.execute(
+             """UPDATE memory SET access_count = access_count + 1,
+                last_accessed = ? WHERE id = ?""",
+             (datetime.utcnow().isoformat(), item_id)
+         )
+         self._conn.commit()
+
+     def add_graph(self, fact: GraphFact):
+         self._conn.execute(
+             """INSERT OR REPLACE INTO graph (id, subject, predicate, value, created_at)
+                VALUES (?, ?, ?, ?, ?)""",
+             (fact.id, fact.subject, fact.predicate, fact.value, fact.created_at)
+         )
+         self._conn.commit()
+
+     def query_graph(self, subject: str | None = None,
+                     predicate: str | None = None) -> list[GraphFact]:
+         q = "SELECT id, subject, predicate, value, created_at FROM graph WHERE 1=1"
+         args = []
+         if subject:
+             q += " AND subject = ?"
+             args.append(subject)
+         if predicate:
+             q += " AND predicate = ?"
+             args.append(predicate)
+         rows = self._conn.execute(q, args).fetchall()
+         return [GraphFact(id=r[0], subject=r[1], predicate=r[2], value=r[3], created_at=r[4]) for r in rows]
+
+     def add_short_term(self, content: str) -> str:
+         id_ = str(uuid.uuid4())
+         expires = (datetime.utcnow() + timedelta(hours=24)).isoformat()
+         self._conn.execute(
+             "INSERT INTO short_term (id, content, expires_at) VALUES (?, ?, ?)",
+             (id_, content, expires)
+         )
+         self._conn.commit()
+         return id_
+
+     def get_short_term(self) -> list[str]:
+         now = datetime.utcnow().isoformat()
+         rows = self._conn.execute(
+             "SELECT content FROM short_term WHERE expires_at > ?", (now,)
+         ).fetchall()
+         return [r[0] for r in rows]
+
+     def cleanup_short_term(self):
+         now = datetime.utcnow().isoformat()
+         self._conn.execute("DELETE FROM short_term WHERE expires_at <= ?", (now,))
+         self._conn.commit()
+
+     def close(self):
+         self._conn.close()
+
+     def export_all(self) -> dict:
+         """Export all data as a dict (for backup / migration)."""
+         memory_rows = self._conn.execute(
+             "SELECT id, type, content, metadata, created_at, last_accessed, access_count, user_id FROM memory"
+         ).fetchall()
+         graph_rows = self._conn.execute(
+             "SELECT id, subject, predicate, value, created_at FROM graph"
+         ).fetchall()
+         return {
+             "memory": [{
+                 "id": r[0], "type": r[1], "content": r[2],
+                 "metadata": json.loads(r[3]), "created_at": r[4],
+                 "last_accessed": r[5], "access_count": r[6], "user_id": r[7]
+             } for r in memory_rows],
+             "graph": [dict(zip(["id", "subject", "predicate", "value", "created_at"], r)) for r in graph_rows],
+             "exported_at": datetime.utcnow().isoformat(),
+         }
+
+
+ # ─── Session Memory (in-process, ephemeral) ────────────────────────────────────
+
+ class SessionMemory:
+     """In-memory session context — current conversation window."""
+
+     def __init__(self, max_turns: int = 20):
+         self.max_turns = max_turns
+         self.turns: list[dict] = []
+
+     def add_turn(self, role: str, content: str, metadata: dict | None = None):
+         self.turns.append({
+             "role": role,
+             "content": content,
+             "metadata": metadata or {},
+             "ts": datetime.utcnow().isoformat(),
+         })
+         if len(self.turns) > self.max_turns:
+             self.turns = self.turns[-self.max_turns:]
+
+     def get_history(self, last_n: int | None = None) -> list[dict]:
+         if last_n is None:
+             return self.turns.copy()
+         return self.turns[-last_n:]
+
+     def get_context_summary(self) -> str:
+         """One-line summary of the session so far."""
+         if not self.turns:
+             return ""
+         parts = [f"[{t['role']}]: {t['content'][:80]}" for t in self.turns[-5:]]
+         return " | ".join(parts)
+
+     def clear(self):
+         self.turns = []
+
+
+ # ─── Knowledge Graph ──────────────────────────────────────────────────────────
+
+ class KnowledgeGraph:
+     """Structured fact extraction from user prompts."""
+
+     def __init__(self, storage: Storage):
+         self.storage = storage
+
+     def add_fact(self, subject: str, predicate: str, value: str):
+         fact = GraphFact(
+             id=str(uuid.uuid4()),
+             subject=subject.strip(),
+             predicate=predicate.strip(),
+             value=value.strip(),
+         )
+         self.storage.add_graph(fact)
+         return fact
+
+     def add_facts_from_text(self, text: str):
+         """Simple rule-based extraction of (subject, predicate, value) triplets.
+         Looks for patterns like:
+             - "I am a X"          → (User, is_a, X)
+             - "I work at X"       → (User, works_at, X)
+             - "My goal is X"      → (User, goal, X)
+             - "We are building X" → (Team, building, X)
+         """
+         text_lower = text.lower()
+         triples = []
+
+         # "I am a X" → user type
+         m = re.search(r"\bi\s+am\s+(?:a\s+)?([^\.]+)", text_lower)
+         if m:
+             triples.append(("User", "is_a", m.group(1).strip()))
+
+         # "I work at X"
+         m = re.search(r"\bi\s+work\s+at\s+([^\.]+)", text_lower)
+         if m:
+             triples.append(("User", "works_at", m.group(1).strip()))
+
+         # "I run X"
+         m = re.search(r"\bi\s+run\s+(?:a\s+)?([^\.]+)", text_lower)
+         if m:
+             triples.append(("User", "runs", m.group(1).strip()))
+
+         # "My goal is X"
+         m = re.search(r"\bmy\s+goal\s+(?:is|was)\s+([^\.]+)", text_lower)
+         if m:
+             triples.append(("User", "goal", m.group(1).strip()))
+
+         # "We are building X"
+         m = re.search(r"\bwe(?:'re|\s+are)\s+building\s+([^\.]+)", text_lower)
+         if m:
+             triples.append(("Team", "building", m.group(1).strip()))
+
+         # "Q3 goal: X"
+         m = re.search(r"\bq\d+\s+goal[^\w]*([^\.]+)", text_lower)
+         if m:
+             triples.append(("Team", "goal", m.group(1).strip()))
+
+         # "Team: X" or "team is X"
+         m = re.search(r"\bteam\s+(?:is\s+)?([^\.]+)", text_lower)
+         if m:
+             triples.append(("Team", "description", m.group(1).strip()))
+
+         for subj, pred, val in triples:
+             self.add_fact(subj, pred, val)
+
+     def query(self, subject: str | None = None) -> list[GraphFact]:
+         return self.storage.query_graph(subject=subject)
+
+     def get_all_as_text(self) -> str:
+         facts = self.storage.query_graph()
+         lines = [f"{f.subject} — {f.predicate}: {f.value}" for f in facts]
+         return "\n".join(lines) if lines else ""
+
+
+ # ─── Main Context Class ────────────────────────────────────────────────────────
+
+ class Context:
+     """
+     Stackme — Your context brain.
+
+     Three-tier memory + knowledge graph, all stored locally.
+
+     Usage:
+         from stackme import Context
+         ctx = Context()
+
+         ctx.add_fact("I run a fintech startup")
+         ctx.add_fact("Q3 goal: 10K paying customers")
+         ctx.add_prompt("User asked ChatGPT about Q3 pricing strategy")
+
+         context = ctx.get_relevant("What should we price at?")
+         # → "I run a fintech startup | Q3 goal: 10K customers"
+
+         ctx.add_user_message("I am a solo founder. We are building a B2B SaaS for fintech")
+         # → auto-extracts facts: (User, is_a, ...), (Team, building, ...)
+     """
+
+     def __init__(self, user_id: str = "default"):
+         self.user_id = user_id
+         self.storage = Storage()
+         self.session = SessionMemory()
+         self.kg = KnowledgeGraph(self.storage)
+
+     # ── Core API ──
+
+     def add_fact(self, content: str, metadata: dict | None = None) -> str:
+         """Add a structured fact to long-term memory."""
+         item = MemoryItem(
+             id=str(uuid.uuid4()),
+             type="fact",
+             content=content.strip(),
+             metadata=metadata or {},
+             user_id=self.user_id,
+         )
+         self.storage.add(item)
+         # Try to extract structured facts from natural language
+         self.kg.add_facts_from_text(content)
+         return item.id
+
+     def add_prompt(self, content: str, metadata: dict | None = None) -> str:
+         """Store a user prompt / message — builds context over time."""
+         item = MemoryItem(
+             id=str(uuid.uuid4()),
+             type="prompt",
+             content=content.strip(),
+             metadata=metadata or {"source": "user_prompt"},
+             user_id=self.user_id,
+         )
+         self.storage.add(item)
+         self.kg.add_facts_from_text(content)
+         return item.id
+
+     def add_context(self, content: str, metadata: dict | None = None) -> str:
+         """Store a context note (result, observation, etc)."""
+         item = MemoryItem(
+             id=str(uuid.uuid4()),
+             type="context",
+             content=content.strip(),
+             metadata=metadata or {},
+             user_id=self.user_id,
+         )
+         self.storage.add(item)
+         return item.id
+
+     def add_user_message(self, text: str) -> str:
+         """Add a user message — stores it as a prompt AND extracts facts."""
+         item_id = self.add_prompt(text)
+         self.session.add_turn("user", text)
+         return item_id
+
+     def add_ai_message(self, text: str) -> str:
+         """Add an AI response — stored as context."""
+         item_id = self.add_context(text, metadata={"source": "ai_response"})
+         self.session.add_turn("assistant", text)
+         return item_id
+
+     def get_relevant(self, query: str, top_k: int = 5) -> str:
+         """Retrieve the most relevant context for a query, as a readable string."""
+         items = self.storage.search(query, top_k=top_k, user_id=self.user_id)
+         for item in items:
+             self.storage.update_access(item.id)
+
+         if not items:
+             return ""
+
+         # Build readable context string
+         fact_items = [i for i in items if i.type == "fact"]
+         prompt_items = [i for i in items if i.type == "prompt"]
+         context_items = [i for i in items if i.type == "context"]
+
+         lines = []
+         if fact_items:
+             lines.append("## Facts")
+             for item in fact_items[:3]:
+                 lines.append(f"- {item.content}")
+         if prompt_items:
+             lines.append("## Past queries")
+             for item in prompt_items[:2]:
+                 lines.append(f"- {item.content[:100]}")
+         if context_items:
+             lines.append("## Context")
+             for item in context_items[:2]:
+                 lines.append(f"- {item.content[:100]}")
+
+         # Add graph facts if query matches subject
+         graph_text = self.kg.get_all_as_text()
+         if graph_text:
+             lines.append("## Knowledge Graph")
+             lines.append(graph_text)
+
+         return "\n".join(lines) if lines else ""
+
+     def search(self, query: str, top_k: int = 10) -> list[str]:
+         """Full-text search across all memories. Returns a list of content strings."""
+         items = self.storage.search(query, top_k=top_k, user_id=self.user_id)
+         return [item.content for item in items]
+
+     def get_facts(self) -> list[str]:
+         """Get all stored facts."""
+         items = self.storage.search("", top_k=100, user_id=self.user_id)
+         return [i.content for i in items if i.type == "fact"]
+
+     def get_graph(self, subject: str | None = None) -> list[GraphFact]:
+         """Query the knowledge graph."""
+         return self.kg.query(subject=subject)
+
+     # ── Session ──
+
+     def add_session_turn(self, role: str, content: str):
+         """Add a turn to in-session memory."""
+         self.session.add_turn(role, content)
+
+     def get_session_history(self, last_n: int | None = None) -> list[dict]:
+         """Get the session conversation history."""
+         return self.session.get_history(last_n)
+
+     def clear_session(self):
+         """Clear in-session memory only (long-term memory is preserved)."""
+         self.session.clear()
+
+     # ── Utility ──
+
+     def export(self) -> dict:
+         """Export all memory data as a JSON-serializable dict."""
+         return self.storage.export_all()
+
+     def count(self) -> int:
+         """Total memory items stored."""
+         row = self.storage._conn.execute(
+             "SELECT COUNT(*) FROM memory WHERE user_id = ?", (self.user_id,)
+         ).fetchone()
+         return row[0] if row else 0
+
+     def clear_all(self):
+         """Wipe ALL memory — use with caution."""
+         self.storage._conn.execute(
+             "DELETE FROM memory WHERE user_id = ?", (self.user_id,)
+         )
+         self.storage._conn.execute("DELETE FROM graph")
+         self.storage._conn.execute("DELETE FROM short_term")
+         self.storage._conn.commit()
+         self.session.clear()
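A quick way to see the rule-based extraction in `add_facts_from_text` at work (grounded in the class above; values come back lowercased because matching runs on the lowered text, and the output assumes a fresh `~/.stackme` store):

```python
from stackme import Context

ctx = Context()
ctx.add_user_message("I am a solo founder. I work at Stackme. My goal is 10K paying customers")

for fact in ctx.get_graph(subject="User"):
    print(fact.subject, fact.predicate, fact.value)
# Roughly:
#   User is_a solo founder
#   User works_at stackme
#   User goal 10k paying customers
```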