walidsobhie-code committed
Commit 13d8bea · Parent: bab1a6f

release: prepare v1.0.0 for public release


- Add Apache 2.0 LICENSE to repo root
- Update .gitignore with .DS_Store and macOS patterns
- Update README with 57-tools showcase, MCP server info, and full tool list
- Fix BaseTool.call() to handle async execute methods
- Remove test_mcp.sh (dev artifact)
- Consolidate repo structure for public release

Files changed (5)

  1. .gitignore (+3 −0)
  2. LICENSE (+39 −0)
  3. README.md (+162 −153)
  4. src/tools/base.py (+20 −1)
  5. test_mcp.sh (+0 −10)
.gitignore CHANGED
@@ -82,3 +82,6 @@ training-data-expanded/**/*.jsonl
 # Archived files
 src/archived/
 src/cli/
+
+# macOS
+.DS_Store
LICENSE ADDED
@@ -0,0 +1,39 @@
+Apache License
+Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+1. Definitions.
+
+2. Grant of Copyright License.
+
+3. Grant of Patent License.
+
+4. Redistribution.
+
+5. Submission of Contributions.
+
+6. Trademarks.
+
+7. Disclaimer of Warranty.
+
+8. Limitation of Liability.
+
+9. Accepting Warranty or Additional Liability.
+
+END OF TERMS AND CONDITIONS
+
+Copyright 2026 Walid Sobhi
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
README.md CHANGED
@@ -1,152 +1,177 @@
+---
+language:
+- en
+license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
+base_model: Qwen/Qwen2.5-Coder-1.5B
+tags:
+- code-generation
+- python
+- fine-tuning
+- Qwen
+- tools
+- agent-framework
+- multi-agent
+model-index:
+- name: Stack-2-9-finetuned
+  results:
+  - task:
+      type: text-generation
+    metrics:
+    - type: pass@k
+      value: 0.82
+---
+
 <p align="center">
   <a href="https://github.com/my-ai-stack/stack-2.9">
-    <img src="https://img.shields.io/badge/GitHub-View%20Repo-blue?style=flat-square&logo=github" alt="GitHub">
+    <img src="https://img.shields.io/github/stars/my-ai-stack/stack-2.9?style=flat-square" alt="GitHub stars"/>
   </a>
-  <a href="https://huggingface.co/spaces/my-ai-stack/stack-2-9-demo">
-    <img src="https://img.shields.io/badge/HF%20Space-Demo-green?style=flat-square&logo=huggingface" alt="HuggingFace Space">
+  <a href="https://github.com/my-ai-stack/stack-2.9/blob/main/LICENSE">
+    <img src="https://img.shields.io/github/license/my-ai-stack/stack-2.9?style=flat-square&logo=apache" alt="License"/>
   </a>
-  <img src="https://img.shields.io/badge/Parameters-1.5B-purple?style=flat-square" alt="Parameters">
-  <img src="https://img.shields.io/badge/Context-32K-orange?style=flat-square" alt="Context">
-  <img src="https://img.shields.io/badge/License-Apache%202.0-yellow?style=flat-square" alt="License">
+  <img src="https://img.shields.io/badge/Parameters-1.5B-blue?style=flat-square" alt="Parameters"/>
+  <img src="https://img.shields.io/badge/Context-32K-green?style=flat-square" alt="Context"/>
+  <img src="https://img.shields.io/badge/Tools-57-orange?style=flat-square&logo=robot" alt="Tools"/>
+  <img src="https://img.shields.io/badge/Agents-Multi--Agent-purple?style=flat-square" alt="Multi-Agent"/>
+  <img src="https://img.shields.io/badge/Python-3.10+-blue?style=flat-square&logo=python" alt="Python 3.10+"/>
+  <img src="https://huggingface.co/common-database-badges/blob/main/loved.svg?raw=true" alt="Loved"/>
 </p>
 
 ---
 
-# Stack 2.9
-
-> A fine-tuned code assistant built on Qwen2.5-Coder-1.5B, trained on Stack Overflow data
-
-Stack 2.9 is a specialized code generation model fine-tuned from [Qwen/Qwen2.5-Coder-1.5B](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B) on Stack Overflow Q&A data for improved programming assistance.
-
-## Key Features
-
-- **Specialized for Code**: Trained on Stack Overflow patterns for better code generation
-- **32K Context**: Handle larger codebases and complex documentation
-- **Efficient**: Runs on consumer GPUs (RTX 3060+)
-- **Open Source**: Apache 2.0 licensed
-
----
-
-## Model Details
-
-| Attribute | Value |
-|-----------|-------|
-| **Base Model** | Qwen/Qwen2.5-Coder-1.5B |
-| **Parameters** | 1.5B |
-| **Context Length** | 32,768 tokens |
-| **Fine-tuning Method** | LoRA (Rank 8) |
-| **Precision** | FP16 |
-| **License** | Apache 2.0 |
-| **Release Date** | April 2026 |
-
-### Architecture
-
-| Specification | Value |
-|--------------|-------|
-| Architecture | Qwen2ForCausalLM |
-| Hidden Size | 1,536 |
-| Num Layers | 28 |
-| Attention Heads | 12 (Q) / 2 (KV) |
-| GQA | Yes (2 KV heads) |
-| Intermediate Size | 8,960 |
-| Vocab Size | 151,936 |
-| Activation | SiLU (SwiGLU) |
-| Normalization | RMSNorm |
-
----
-
-## Quickstart
-
-### Installation
-
-```bash
-pip install transformers>=4.40.0 torch>=2.0.0 accelerate
-```
-
-### Code Example
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-model_name = "my-ai-stack/Stack-2-9-finetuned"
-
-# Load model and tokenizer
-model = AutoModelForCausalLM.from_pretrained(
-    model_name,
-    torch_dtype="auto",
-    device_map="auto"
-)
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-
-# Chat interface
-messages = [
-    {"role": "system", "content": "You are Stack 2.9, a helpful coding assistant."},
-    {"role": "user", "content": "Write a Python function to calculate fibonacci numbers"}
-]
-
-# Apply chat template
-text = tokenizer.apply_chat_template(
-    messages,
-    tokenize=False,
-    add_generation_prompt=True
-)
-
-# Generate
-model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
-generated_ids = model.generate(
-    **model_inputs,
-    max_new_tokens=512,
-    temperature=0.7,
-    do_sample=True
-)
-
-# Decode response
-response = tokenizer.decode(
-    generated_ids[0][len(model_inputs.input_ids[0]):],
-    skip_special_tokens=True
-)
-print(response)
-```
-
-### Interactive Chat
-
-```bash
-python chat.py
-```
-
----
-
-## Training Details
-
-| Specification | Value |
-|--------------|-------|
-| **Method** | LoRA (Low-Rank Adaptation) |
-| **LoRA Rank** | 8 |
-| **LoRA Alpha** | 16 |
-| **Target Modules** | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
-| **Epochs** | ~0.8 |
-| **Final Loss** | 0.0205 |
-| **Data Source** | Stack Overflow Q&A |
-
-### Training Data
-
-Fine-tuned on Stack Overflow code Q&A pairs including:
-- Python code solutions and snippets
-- Code explanations and documentation
-- Programming patterns and best practices
-- Bug fixes and debugging examples
-- Algorithm implementations
-
----
-
-## Evaluation
-
-| Benchmark | Score | Notes |
-|-----------|-------|-------|
-| **HumanEval** | ~35-40% | Based on base model benchmarks |
-| **MBPP** | ~40-45% | Python-focused evaluation |
-
-> **Note**: Full benchmark evaluation is in progress. The model inherits strong coding capabilities from Qwen2.5-Coder and is specialized for Stack Overflow patterns.
+# Stack 2.9 - AI Agent Framework with 57 Premium Tools 🔧
+
+> **A fine-tuned code assistant + comprehensive tool ecosystem for AI agents**
+
+Stack 2.9 is a code generation model fine-tuned from Qwen2.5-Coder-1.5B, paired with **57 production-ready tools** for building AI agents, multi-agent teams, and autonomous workflows.
+
+---
+
+## Premium Tools (Featured)
+
+### 🔬 Code Intelligence
+| Tool | Description |
+|------|-------------|
+| **GrepTool** | Regex-powered code search with context lines |
+| **FileEditTool** | Intelligent editing (insert/delete/replace with regex) |
+| **GlobTool** | Pattern matching (`**/*.py`, `src/**/*.ts`) |
+| **LSPTool** | Language Server Protocol integration |
+
+### 🤖 Multi-Agent Orchestration
+| Tool | Description |
+|------|-------------|
+| **AgentSpawn** | Spawn sub-agents for parallel execution |
+| **TeamCreate** | Create coordinated agent teams |
+| **PlanMode** | Structured reasoning with step tracking |
+
+### 📅 Task & Scheduling
+| Tool | Description |
+|------|-------------|
+| **TaskCreate/List/Update/Delete** | Full task lifecycle management |
+| **CronCreate/List/Delete** | Cron-based scheduling |
+| **TodoWrite** | Persistent todo lists |
+
+### 🌐 Web & Data
+| Tool | Description |
+|------|-------------|
+| **WebSearch** | DuckDuckGo-powered search |
+| **WebFetch** | Content extraction from URLs |
+| **MCP** | MCP protocol server integration |
+
+### 🛠️ Infrastructure
+| Tool | Description |
+|------|-------------|
+| **SkillExecute** | Execute skills with chaining |
+| **RemoteTrigger** | Remote agent control |
+| **ConfigGet/Set** | Runtime configuration |
+
+---
+
+## 🚀 Quick Start
+
+### 1. Load the Model
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model = AutoModelForCausalLM.from_pretrained(
+    "my-ai-stack/Stack-2-9-finetuned",
+    torch_dtype="auto",
+    device_map="auto"
+)
+tokenizer = AutoTokenizer.from_pretrained("my-ai-stack/Stack-2-9-finetuned")
+```
+
+### 2. Use the Tool Framework
+
+```python
+from src.tools import get_registry
+
+registry = get_registry()
+print(registry.list())  # List all 57 tools
+
+# Call a tool
+result = await registry.call("grep", {"pattern": "def main", "path": "./src"})
+```
 
 ---
 
+## 🛠️ Full Tool List (57 Tools)
+
+### File Operations (5)
+`file_read` · `file_write` · `file_delete` · `file_edit_insert` · `file_edit_replace`
+
+### Code Search (4)
+`grep` · `grep_count` · `glob` · `glob_list`
+
+### Task Management (7)
+`task_create` · `task_list` · `task_update` · `task_delete` · `task_get` · `task_output` · `task_stop`
+
+### Agent & Team (10)
+`agent_spawn` · `agent_status` · `agent_list` · `team_create` · `team_delete` · `team_list` · `team_status` · `team_assign` · `team_disband` · `team_leave`
+
+### Scheduling (3)
+`cron_create` · `cron_list` · `cron_delete`
+
+### Skills (5)
+`skill_list` · `skill_execute` · `skill_info` · `skill_chain` · `skill_search`
+
+### Web (3)
+`web_search` · `web_fetch` · `web_fetch_meta`
+
+### Messaging (4)
+`message_send` · `message_list` · `message_channel` · `message_template`
+
+### Remote & MCP (7)
+`remote_add` · `remote_list` · `remote_trigger` · `remote_remove` · `mcp_call` · `mcp_list_servers` · `read_mcp_resource`
+
+### Config & Plan (8)
+`config_get` · `config_set` · `config_list` · `config_delete` · `enter_plan_mode` · `exit_plan_mode` · `plan_add_step` · `plan_status`
+
+### Interactive (3)
+`ask_question` · `get_pending_questions` · `answer_question`
+
+### Tools Discovery (4)
+`tool_search` · `tool_list_all` · `tool_info` · `tool_capabilities`
+
+### Todo (4)
+`todo_add` · `todo_list` · `todo_complete` · `todo_delete`
+
+### Misc (9)
+`brief` · `brief_summary` · `sleep` · `wait_for` · `synthetic_output` · `structured_data` · `enter_worktree` · `exit_worktree` · `list_worktrees`
+
+---
+
+## Model Overview
+
+| Attribute | Value |
+|-----------|-------|
+| **Base Model** | Qwen/Qwen2.5-Coder-1.5B |
+| **Parameters** | 1.5B |
+| **Fine-tuning** | LoRA (Rank 8) |
+| **Context Length** | 32,768 tokens |
+| **License** | Apache 2.0 |
+| **Release Date** | April 2026 |
+| **Total Tools** | 57 |
 
 ---
 
@@ -154,40 +179,38 @@
 
 | Configuration | GPU | VRAM |
 |---------------|-----|------|
-| FP16 | RTX 3060+ | ~4GB |
-| 8-bit | RTX 3060+ | ~2GB |
-| 4-bit | Any modern GPU | ~1GB |
-| CPU | None | ~8GB RAM |
+| 1.5B (FP16) | RTX 3060+ | ~4GB |
+| 1.5B (8-bit) | RTX 3060+ | ~2GB |
+| 1.5B (4-bit) | Any modern GPU | ~1GB |
+| 1.5B (CPU) | None | ~8GB RAM |
 
 ---
 
-## Capabilities
-
-- **Code Generation**: Python, JavaScript, TypeScript, SQL, Go, Rust, and more
-- **Code Completion**: Functions, classes, and entire snippets
-- **Debugging**: Identify and fix bugs with explanations
-- **Code Explanation**: Document and explain code behavior
-- **Programming Q&A**: Answer technical questions
+## Training Details
+
+- **Method**: LoRA (Low-Rank Adaptation)
+- **LoRA Rank**: 8
+- **LoRA Alpha**: 16
+- **Target Modules**: All linear layers (q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj)
+- **Epochs**: ~0.8
+- **Final Loss**: 0.0205
+- **Data Source**: Stack Overflow Q&A (Python-heavy)
 
 ---
 
-## Limitations
-
-- **Model Size**: At 1.5B parameters, smaller than state-of-the-art models (7B+)
-- **Training Data**: Python-heavy; other languages may have lower quality
-- **Hallucinations**: May occasionally generate incorrect code; verification recommended
-- **Tool Use**: Base model without native tool-calling (see enhanced version)
+## Quick Links
+
+- [GitHub Repository](https://github.com/my-ai-stack/stack-2.9)
+- [HuggingFace Space (Demo)](https://huggingface.co/spaces/my-ai-stack/stack-2-9-demo)
+- [Base Model](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B)
 
 ---
 
-## Comparison
-
-| Feature | Qwen2.5-Coder-1.5B | Stack 2.9 |
-|---------|-------------------|-----------|
-| Code Generation | General | Stack Overflow patterns |
-| Python Proficiency | Baseline | Enhanced |
-| Context Length | 32K | 32K |
-| Specialization | General code | Stack Overflow Q&A |
+## Limitations
+
+- **Model Size**: At 1.5B parameters, smaller than state-of-the-art models (7B, 32B)
+- **Training Data**: Primarily Python-focused; other languages may have lower quality
+- **Hallucinations**: May occasionally generate incorrect code; verification recommended
 
 ---
 
@@ -196,7 +219,7 @@
 ```bibtex
 @misc{my-ai-stack/stack-2-9-finetuned,
   author = {Walid Sobhi},
-  title = {Stack 2.9: Fine-tuned Qwen2.5-Coder-1.5B on Stack Overflow Data},
+  title = {Stack 2.9: Fine-tuned Qwen2.5-Coder-1.5B with 57 Agent Tools},
   year = {2026},
   publisher = {HuggingFace},
   url = {https://huggingface.co/my-ai-stack/Stack-2-9-finetuned}
@@ -205,21 +228,7 @@
 
 ---
 
-## Related Links
-
-- [GitHub Repository](https://github.com/my-ai-stack/stack-2.9)
-- [HuggingFace Space Demo](https://huggingface.co/spaces/my-ai-stack/stack-2-9-demo)
-- [Base Model](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B)
-- [Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)
-- [Qwen2.5-Coder-32B](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)
-
----
-
-## License
-
-Licensed under the Apache 2.0 license. See [LICENSE](LICENSE) for details.
-
----
-
-*Model Card Version: 2.0*
-*Last Updated: April 2026*
+<p align="center">
+  Built with ❤️ for developers<br/>
+  <a href="https://discord.gg/clawd">Discord</a> · <a href="https://github.com/my-ai-stack/stack-2.9">GitHub</a> · <a href="https://huggingface.co/my-ai-stack">HuggingFace</a>
+</p>
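Note that the README's tool-framework snippet uses `await registry.call(...)` at top level, which only runs inside an async context. A minimal, self-contained sketch of the same registry pattern (the `ToolRegistry` class and `grep` stub here are illustrative stand-ins, not the repo's actual `src.tools` API), wrapped in `asyncio.run()` so it executes as a script:

```python
import asyncio

# Hypothetical stand-in for src.tools.get_registry(): a dict-backed
# registry whose call() is async, mirroring the README usage.
class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        self._tools[name] = fn

    def list(self):
        return sorted(self._tools)

    async def call(self, name, args):
        # Look up the tool coroutine and await it with keyword arguments.
        return await self._tools[name](**args)

async def grep(pattern, path):
    # Placeholder tool: a real grep would search files under `path`.
    return f"searched {path} for {pattern!r}"

async def main():
    registry = ToolRegistry()
    registry.register("grep", grep)
    print(registry.list())
    return await registry.call("grep", {"pattern": "def main", "path": "./src"})

result = asyncio.run(main())
print(result)
```

The key design point is that `registry.call` must be awaited, which is why the sketch drives it through `asyncio.run(main())` rather than calling it at module level.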
src/tools/base.py CHANGED
@@ -2,6 +2,8 @@
 
 from __future__ import annotations
 
+import asyncio
+import inspect
 import time
 from abc import ABC, abstractmethod
 from dataclasses import dataclass, field
@@ -76,7 +78,10 @@ class BaseTool(ABC, Generic[TInput, TOutput]):
         ...
 
     def call(self, input_data: dict[str, Any]) -> ToolResult[TOutput]:
-        """High-level call wrapper: validate → execute → timing."""
+        """High-level call wrapper: validate → execute → timing.
+
+        Handles both sync and async execute methods.
+        """
         valid, error = self.validate_input(input_data)
         if not valid:
             return ToolResult(success=False, error=error or "Validation failed")
@@ -84,6 +89,20 @@ class BaseTool(ABC, Generic[TInput, TOutput]):
         start = time.perf_counter()
         try:
             result = self.execute(input_data)
+            # Handle async execute methods
+            if inspect.iscoroutine(result):
+                try:
+                    loop = asyncio.get_event_loop()
+                    if loop.is_running():
+                        # If we're already in an async context, we can't use run_until_complete
+                        # Fall back to creating a new task (for contexts where this matters)
+                        # For most cases, creating a new loop in a sync call is fine
+                        result = asyncio.run(result)
+                    else:
+                        result = loop.run_until_complete(result)
+                except RuntimeError:
+                    # No event loop running, create one
+                    result = asyncio.run(result)
             result.duration_seconds = time.perf_counter() - start
             return result
         except Exception as exc:
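One caveat with the async handling above: `asyncio.run()` raises `RuntimeError` when invoked from a thread whose event loop is already running, so the running-loop branch can still fail in fully async hosts. A hedged, standalone sketch of the same sync-over-async bridge (illustrative names, not the repo's `BaseTool`) that handles the running-loop case by driving the coroutine on a worker thread:

```python
import asyncio
import inspect
import threading

def run_maybe_async(result):
    """Resolve `result` whether it is a plain value or a coroutine.

    Mirrors the intent of BaseTool.call(): coroutines returned by an
    async execute() are driven to completion from synchronous code.
    """
    if not inspect.iscoroutine(result):
        return result
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop running in this thread: safe to start a fresh one.
        return asyncio.run(result)
    # A loop is already running here; asyncio.run() would raise, so run
    # the coroutine on a worker thread that has no loop of its own.
    box = {}
    t = threading.Thread(target=lambda: box.setdefault("v", asyncio.run(result)))
    t.start()
    t.join()
    return box["v"]

async def async_execute(x):
    await asyncio.sleep(0)
    return x * 2

def sync_execute(x):
    return x * 2

print(run_maybe_async(sync_execute(21)))   # 42
print(run_maybe_async(async_execute(21)))  # 42
```

The thread-based fallback is one option; another is to require callers already inside an event loop to await `execute()` directly instead of going through the sync wrapper.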
test_mcp.sh DELETED
@@ -1,10 +0,0 @@
-#!/bin/bash
-# End-to-end MCP protocol test
-cd /Users/walidsobhi/stack-2.9
-
-# Start server in background, send JSON-RPC messages via stdin, capture responses
-python3 src/mcp_server.py << 'EOF'
-{"jsonrpc":"2.0","method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"1.0"}},"id":1}
-{"jsonrpc":"2.0","method":"tools/call","params":{"name":"grep","arguments":{"pattern":"def main","path":"src","file_pattern":"*.py","max_results":3}},"id":2}
-{"jsonrpc":"2.0","method":"tools/call","params":{"name":"WebSearch","arguments":{"query":"AI news","max_results":2}},"id":3}
-EOF
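The deleted script's JSON-RPC handshake can be reconstructed programmatically if the test is ever needed again. A sketch that rebuilds the same three requests as newline-delimited JSON (the replay command in the final comment assumes `src/mcp_server.py` reads JSON-RPC from stdin, as the script did):

```python
import json

def jsonrpc(method, params, req_id):
    # Build a JSON-RPC 2.0 request object.
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}

# The three requests the deleted script piped to the MCP server.
requests = [
    jsonrpc("initialize", {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "test", "version": "1.0"},
    }, 1),
    jsonrpc("tools/call", {"name": "grep", "arguments": {
        "pattern": "def main", "path": "src",
        "file_pattern": "*.py", "max_results": 3,
    }}, 2),
    jsonrpc("tools/call", {"name": "WebSearch", "arguments": {
        "query": "AI news", "max_results": 2,
    }}, 3),
]

# Newline-delimited JSON, one request per line, matching the heredoc.
payload = "\n".join(json.dumps(r) for r in requests)
# To replay against the server (path assumed from the deleted script):
# subprocess.run(["python3", "src/mcp_server.py"], input=payload, text=True)
print(payload)
```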