metadata_version string | name string | version string | summary string | description string | description_content_type string | author string | author_email string | maintainer string | maintainer_email string | license string | keywords string | classifiers list | platform list | home_page string | download_url string | requires_python string | requires list | provides list | obsoletes list | requires_dist list | provides_dist list | obsoletes_dist list | requires_external list | project_urls list | uploaded_via string | upload_time timestamp[us] | filename string | size int64 | path string | python_version string | packagetype string | comment_text string | has_signature bool | md5_digest string | sha256_digest string | blake2_256_digest string | license_expression string | license_files list | recent_7d_downloads int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2.4 | pmtvs-matrix | 0.3.2 | Matrix analysis primitives (SVD, covariance, eigendecomposition) | # pmtvs-matrix
Matrix analysis primitives.
## Installation
```bash
pip install pmtvs-matrix
```
## Functions
### Core Matrix Operations
- `covariance_matrix(data)` - Covariance matrix
- `correlation_matrix(data)` - Correlation matrix
- `eigendecomposition(matrix)` - Eigenvalues and eigenvectors
- `svd_decomposition(matrix)` - Singular value decomposition
- `matrix_rank(matrix)` - Matrix rank
- `condition_number(matrix)` - Condition number
- `effective_rank(matrix)` - Shannon entropy-based rank
- `graph_laplacian(adjacency)` - Graph Laplacian
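For reference, the combinatorial graph Laplacian that `graph_laplacian` computes is conventionally defined as L = D - A. A plain-NumPy sketch of that textbook definition (illustrative only, not necessarily the package's exact implementation):

```python
import numpy as np

def laplacian(adjacency):
    # L = D - A, with D the diagonal matrix of vertex degrees.
    A = np.asarray(adjacency, dtype=float)
    return np.diag(A.sum(axis=1)) - A

# Path graph on three nodes: 0 - 1 - 2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = laplacian(A)
print(L)                      # every row sums to zero
print(np.linalg.eigvalsh(L))  # smallest eigenvalue is 0 for a connected graph
```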
### Eigenvalue Geometry
- `effective_dimension(eigenvalues)` - Participation ratio / entropy dimension
- `participation_ratio(eigenvalues)` - Participation ratio
- `alignment_metric(eigenvalues)` - Distribution alignment (cosine / KL)
- `eigenvalue_spread(eigenvalues)` - Coefficient of variation
- `matrix_entropy(matrix)` - Shannon entropy of eigenvalues
- `geometric_mean_eigenvalue(eigenvalues)` - Geometric mean
- `explained_variance_ratio(eigenvalues)` - Per-component variance
- `cumulative_variance_ratio(eigenvalues)` - Cumulative variance
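The participation ratio and variance-ratio quantities above follow standard formulas on an eigenvalue spectrum; a minimal NumPy sketch of those formulas (illustrative, not the package's code):

```python
import numpy as np

def participation_ratio(eigenvalues):
    # PR = (sum lam)^2 / sum(lam^2): how many dimensions carry real weight.
    lam = np.asarray(eigenvalues, dtype=float)
    return lam.sum() ** 2 / np.sum(lam ** 2)

def explained_variance_ratio(eigenvalues):
    # Fraction of total variance carried by each component.
    lam = np.asarray(eigenvalues, dtype=float)
    return lam / lam.sum()

lam = np.array([4.0, 1.0, 1.0])       # one dominant direction, two weak ones
print(participation_ratio(lam))       # 36 / 18 = 2.0 effective dimensions
print(explained_variance_ratio(lam))  # per-component fractions, summing to 1
```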
### Dynamic Mode Decomposition
- `dynamic_mode_decomposition(signals)` - Full DMD
- `dmd_frequencies(eigenvalues, dt)` - DMD frequencies in Hz
- `dmd_growth_rates(eigenvalues, dt)` - DMD growth/decay rates
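DMD eigenvalues live in discrete time; the usual conversion to physical frequencies and growth rates takes the complex logarithm and divides by the sampling step. A sketch of that standard conversion (assumed from the usual DMD formulas, not taken from the package source):

```python
import numpy as np

def to_frequencies_hz(eigenvalues, dt):
    # f = Im(ln lam) / (2 pi dt): oscillation frequency of each DMD mode.
    return np.log(np.asarray(eigenvalues, dtype=complex)).imag / (2 * np.pi * dt)

def to_growth_rates(eigenvalues, dt):
    # sigma = ln|lam| / dt: positive grows, negative decays, zero is neutral.
    return np.log(np.abs(np.asarray(eigenvalues, dtype=complex))) / dt

# A unit-magnitude eigenvalue rotating 90 degrees per step, sampled at dt = 0.1 s
lam = np.array([np.exp(1j * np.pi / 2)])
print(to_frequencies_hz(lam, 0.1))  # 2.5 Hz
print(to_growth_rates(lam, 0.1))    # 0.0 (neutrally stable)
```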
### Matrix Information Theory
- `mutual_information_matrix(signals)` - Pairwise MI matrix
- `transfer_entropy_matrix(signals)` - Directed TE matrix
- `granger_matrix(signals)` - Granger causality F-stats and p-values
## Backend
Pure Python implementation.
## License
PolyForm Strict 1.0.0 with Additional Terms.
- **Students & individual researchers:** Free. Cite us.
- **Funded research labs (grants > $100K):** Academic Research License required. [Contact us](mailto:licensing@pmtvs.dev).
- **Commercial use:** Commercial License required. [Contact us](mailto:licensing@pmtvs.dev).
See [LICENSE](LICENSE) for full terms.
| text/markdown | pmtvs contributors | null | null | null | PolyForm-Strict-1.0.0 | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.20",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:57:10.612515 | pmtvs_matrix-0.3.2-py3-none-any.whl | 12,539 | e0/39/470107bbc5fcfb21b107e5ff8942e008f9a64141cc0507afb3249ab9075e/pmtvs_matrix-0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 1d9c1f2bafbab4b7d8bad15010f87b9a | a74a18875fa3c1c6736ff992542e047ed8e8ff968671a74957a01d735414a186 | e039470107bbc5fcfb21b107e5ff8942e008f9a64141cc0507afb3249ab9075e | null | [
"LICENSE"
] | 87 |
2.4 | pmtvs-information | 0.3.2 | Information theory primitives | # pmtvs-information
Information theory primitives.
## Installation
```bash
pip install pmtvs-information
```
## Functions
### Core Information Measures
- `mutual_information(x, y)` - I(X;Y)
- `transfer_entropy(source, target)` - Information flow
- `conditional_entropy(x, y)` - H(X|Y)
- `joint_entropy(x, y)` - H(X,Y)
- `kl_divergence(p, q)` - Kullback-Leibler divergence
- `js_divergence(p, q)` - Jensen-Shannon divergence
- `information_gain(x, y)` - IG(X;Y)
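The KL and JS divergences listed above have short closed forms for discrete distributions; a self-contained NumPy sketch of those standard definitions (not the package's implementation):

```python
import numpy as np

def kl(p, q):
    # D_KL(P || Q) = sum p_i ln(p_i / q_i), for discrete distributions.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # 0 ln 0 contributes nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(p, q):
    # Symmetrized, smoothed KL against the mixture M; bounded by ln 2 (nats).
    m = 0.5 * (np.asarray(p, dtype=float) + np.asarray(q, dtype=float))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p, q = np.array([0.5, 0.5]), np.array([0.9, 0.1])
print(kl(p, q), kl(q, p))  # asymmetric
print(js(p, q), js(q, p))  # identical in both directions
```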
### Entropy Variants
- `shannon_entropy(data)` - Shannon entropy H(X)
- `renyi_entropy(data, alpha)` - Rényi entropy of order alpha
- `tsallis_entropy(data, q)` - Tsallis non-extensive entropy
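Rényi entropy generalizes Shannon entropy and recovers it in the limit alpha -> 1; the discrete formulas are compact enough to sketch (standard definitions, not the package's code):

```python
import numpy as np

def shannon(p):
    # H(X) = -sum p ln p (in nats); zero-probability outcomes contribute nothing.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def renyi(p, alpha):
    # H_alpha = ln(sum p^alpha) / (1 - alpha); Shannon is the alpha -> 1 limit.
    p = np.asarray(p, dtype=float)
    return float(np.log(np.sum(p ** alpha)) / (1 - alpha))

u = np.full(4, 0.25)
print(shannon(u))     # ln 4, the maximum for four outcomes
print(renyi(u, 2.0))  # also ln 4: all Renyi orders agree on a uniform p
```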
### Divergence Measures
- `cross_entropy(p, q)` - Cross entropy H(P, Q)
- `hellinger_distance(p, q)` - Hellinger distance
- `total_variation_distance(p, q)` - Total variation distance
### Multivariate Information
- `conditional_mutual_information(x, y, z)` - I(X;Y|Z)
- `multivariate_mutual_information(variables)` - Co-information
- `total_correlation(variables)` - Total correlation
- `interaction_information(variables)` - Interaction information
- `dual_total_correlation(variables)` - Dual total correlation
### Information Decomposition
- `partial_information_decomposition(s1, s2, target)` - PID
- `redundancy(sources, target)` - Redundant information
- `synergy(sources, target)` - Synergistic information
- `information_atoms(sources, target)` - All information atoms
### Causality
- `granger_causality(source, target)` - Granger causality test
- `convergent_cross_mapping(sig_a, sig_b)` - CCM for nonlinear causality
- `phase_coupling(signal1, signal2)` - Phase-locking value
## Backend
Pure Python implementation. Requires scipy >= 1.7.
## License
PolyForm Strict 1.0.0 with Additional Terms.
- **Students & individual researchers:** Free. Cite us.
- **Funded research labs (grants > $100K):** Academic Research License required. [Contact us](mailto:licensing@pmtvs.dev).
- **Commercial use:** Commercial License required. [Contact us](mailto:licensing@pmtvs.dev).
See [LICENSE](LICENSE) for full terms.
| text/markdown | pmtvs contributors | null | null | null | PolyForm-Strict-1.0.0 | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.20",
"scipy>=1.7",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:57:09.486034 | pmtvs_information-0.3.2-py3-none-any.whl | 13,038 | be/bb/db789fe1c212eb17b52920c4a1e16900ec03ab7d4671add4c4a598714d65/pmtvs_information-0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | ef631eaffa51640184a3c09680e0ab74 | 34f67256000357c4c7f634b48e9d7daa3a84e17b3939c7a7ea1eb723ef5473b0 | bebbdb789fe1c212eb17b52920c4a1e16900ec03ab7d4671add4c4a598714d65 | null | [
"LICENSE"
] | 88 |
2.4 | pmtvs-dynamics | 0.3.2 | Dynamical systems analysis primitives (pure Python) | # pmtvs-dynamics
Dynamical systems analysis primitives.
## Installation
```bash
pip install pmtvs-dynamics
```
## Functions
### Lyapunov Analysis
- `ftle(trajectory, dt, method)` - Finite-Time Lyapunov Exponent
- `largest_lyapunov_exponent(signal, dim, tau)` - Rosenstein method
- `lyapunov_spectrum(trajectory, dt, n_exponents)` - Full spectrum
- `lyapunov_rosenstein(signal, dimension, delay)` - Rosenstein with divergence curve
- `lyapunov_kantz(signal, dimension, delay)` - Kantz method with multi-epsilon
- `ftle_local_linearization(trajectory, time_horizon)` - FTLE via local Jacobian
- `ftle_direct_perturbation(signal, dimension)` - FTLE via perturbation
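For intuition about what these estimators target: the largest Lyapunov exponent of the logistic map can be computed directly from its derivative, which is exactly the quantity the data-driven Rosenstein and Kantz methods above must estimate from a time series alone. A minimal, self-contained illustration (not package code):

```python
import numpy as np

def logistic_lyapunov(r, n=100_000, x0=0.2):
    # Average log-derivative |f'(x)| = |r (1 - 2x)| along the orbit of
    # the logistic map x -> r x (1 - x).
    x, total = x0, 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        # Guard against the measure-zero case x = 0.5, where f'(x) = 0.
        total += np.log(max(abs(r * (1 - 2 * x)), 1e-300))
    return total / n

print(logistic_lyapunov(4.0))  # approaches ln 2 ~ 0.693: chaos
print(logistic_lyapunov(2.5))  # negative: orbits settle on a stable fixed point
```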
### Embedding Estimation
- `estimate_embedding_dim_cao(signal, max_dim, tau)` - Cao's method
- `estimate_tau_ami(signal, max_tau, n_bins)` - Average mutual information delay
### Recurrence Analysis
- `recurrence_matrix(trajectory, threshold)` - Recurrence plot
- `recurrence_rate(R)` - Fraction of recurrence points
- `determinism(R)` - Diagonal line structure
- `laminarity(R)` - Vertical line structure
- `trapping_time(R)` - Average vertical line length
- `entropy_recurrence(R)` - Line length entropy
- `max_diagonal_line(R)` - Longest diagonal line
- `divergence_rqa(R)` - Inverse of max diagonal line
- `determinism_from_signal(signal)` - DET directly from signal
- `rqa_metrics(signal)` - Full RQA metrics dictionary
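All of the RQA measures above derive from a binary recurrence matrix; its textbook construction is small enough to show directly (plain NumPy, illustrative only, not the package's implementation):

```python
import numpy as np

def recurrence_matrix(trajectory, threshold):
    # R[i, j] = 1 when states i and j lie within `threshold` of each other.
    traj = np.asarray(trajectory, dtype=float)
    dists = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    return (dists <= threshold).astype(int)

def recurrence_rate(R):
    # Fraction of matrix entries that are recurrent.
    return R.mean()

traj = np.array([[0.0], [0.1], [5.0]])  # two nearby states, one far away
R = recurrence_matrix(traj, threshold=0.5)
print(R)                   # a 2x2 block of 1s plus an isolated diagonal 1
print(recurrence_rate(R))  # 5/9
```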
### Attractor Analysis
- `correlation_dimension(trajectory)` - Grassberger-Procaccia
- `correlation_integral(embedded, r)` - Correlation integral at radius r
- `information_dimension(signal)` - Information dimension
- `attractor_reconstruction(signal, dim, tau)` - Delay embedding
- `kaplan_yorke_dimension(spectrum)` - From Lyapunov exponents
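`attractor_reconstruction` is a delay embedding: lagged copies of a scalar signal are stacked into state vectors. A sketch of that standard construction (the package's own signature and return layout may differ):

```python
import numpy as np

def delay_embed(signal, dim, tau):
    # Row i of the embedding is [x_i, x_{i+tau}, ..., x_{i+(dim-1) tau}].
    signal = np.asarray(signal, dtype=float)
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])

x = np.sin(np.linspace(0, 8 * np.pi, 200))  # a scalar time series
emb = delay_embed(x, dim=3, tau=5)
print(emb.shape)  # (190, 3): a 3-dimensional reconstructed state space
```

The `dim` and `tau` inputs are exactly what the embedding-estimation helpers above (`estimate_embedding_dim_cao`, `estimate_tau_ami`) are meant to choose.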
### Stability Analysis
- `fixed_point_detection(trajectory)` - Find stationary regions
- `stability_index(trajectory)` - Local stability measure
- `jacobian_eigenvalues(trajectory)` - Local Jacobian eigenvalues
- `bifurcation_indicator(signal)` - Detect bifurcations
- `phase_space_contraction(trajectory)` - Flow divergence
- `hilbert_stability(y)` - Instantaneous frequency stability
- `wavelet_stability(y)` - Wavelet-based stability
- `detect_collapse(effective_dim)` - Detect dimensional collapse
### Saddle Point Analysis
- `estimate_jacobian_local(trajectory, point_idx)` - Local Jacobian estimation
- `classify_jacobian_eigenvalues(jacobian)` - Eigenvalue classification
- `detect_saddle_points(trajectory)` - Find saddle points
- `compute_separatrix_distance(trajectory, saddle_indices)` - Distance to separatrix
- `compute_basin_stability(trajectory, saddle_score)` - Basin stability
### Sensitivity Analysis
- `compute_variable_sensitivity(trajectory)` - Per-variable sensitivity
- `compute_directional_sensitivity(trajectory, direction)` - Directional sensitivity
- `compute_sensitivity_evolution(sensitivity)` - Sensitivity over time
- `detect_sensitivity_transitions(sensitivity, rank)` - Regime transitions
- `compute_influence_matrix(trajectory)` - Variable influence matrix
### Domain Analysis
- `basin_stability(y)` - Basin stability measure
- `cycle_counting(y)` - Cycle statistics
- `local_outlier_factor(y)` - Local outlier factor
- `time_constant(y)` - Characteristic time constant
## Backend
Pure Python implementation (no Rust acceleration for this package).
## License
PolyForm Strict 1.0.0 with Additional Terms.
- **Students & individual researchers:** Free. Cite us.
- **Funded research labs (grants > $100K):** Academic Research License required. [Contact us](mailto:licensing@pmtvs.dev).
- **Commercial use:** Commercial License required. [Contact us](mailto:licensing@pmtvs.dev).
See [LICENSE](LICENSE) for full terms.
| text/markdown | pmtvs contributors | null | null | null | PolyForm-Strict-1.0.0 | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :... | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy>=1.20",
"pytest>=7.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/pmtvs/pmtvs",
"Repository, https://github.com/pmtvs/pmtvs"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:57:08.701219 | pmtvs_dynamics-0.3.2-py3-none-any.whl | 30,906 | 9d/50/0500911e36c59ef438d5a966297187bdea7f1139685fdedcbeadf7d91747/pmtvs_dynamics-0.3.2-py3-none-any.whl | py3 | bdist_wheel | null | false | 565f407b430e413132fe2bd0b2046332 | 26cc3c19cd27e90ec56749d0bc6e7f6a52ff79eeab0a8ffb990f96690dd467d9 | 9d500500911e36c59ef438d5a966297187bdea7f1139685fdedcbeadf7d91747 | null | [
"LICENSE"
] | 89 |
2.4 | engrams-mcp | 1.0.2 | A governance-aware, context-intelligent development platform built on MCP | <div align="center">
<img src="https://raw.githubusercontent.com/stevebrownlee/engrams/refs/heads/main/static/engram.sh.png" style="height:150px;" />
# Engrams
## Enhanced Memory & Knowledge Platform
</div>
[License: Apache-2.0](LICENSE)
[Python 3.10+](https://www.python.org/downloads/)
A governance-aware, context-intelligent development platform built on the Model Context Protocol (MCP). Engrams transforms how AI agents understand and work with your projects by providing structured memory, intelligent context retrieval, and visual knowledge exploration.
**Forked from** [GreatScottyMac/context-portal](https://github.com/GreatScottyMac/context-portal) v0.3.13
[Features](#features) • [Installation](#installation) • [Quick Start](#quick-start) • [Documentation](#documentation)
---
## What is Engrams?
Engrams is an **intelligent project memory system** that helps AI assistants deeply understand your software projects. Instead of relying on simple text files or scattered documentation, Engrams provides a structured, queryable knowledge graph.
### Stored Knowledge
| Type | Description |
|------|-------------|
| **Decisions** | Why you chose PostgreSQL over MongoDB, why you're using microservices |
| **Progress** | Current tasks, blockers, what's in flight |
| **Patterns** | Architectural patterns, coding conventions, system designs |
| **Context** | Project goals, current focus, team agreements |
| **Custom Data** | Glossaries, specifications, any structured project knowledge |
---
## Setup
### MCP Server Configuration
Engrams runs as a Model Context Protocol (MCP) server. Configure it in your MCP client's settings file (typically `mcp.json` or in your IDE's MCP configuration). The easiest way to use Engrams is via `uvx`, which automatically manages the Python environment:
```json
{
"mcpServers": {
"engrams": {
"command": "uvx",
"args": [
"--from",
"engrams-mcp",
"engrams-mcp",
"--mode",
"stdio",
"--log-level",
"INFO"
]
}
}
}
```
#### Configuration Options
| Option | Description | Default |
|--------|-------------|---------|
| `--mode` | Communication mode: `stdio` or `http` | `stdio` |
| `--log-level` | Logging verbosity: `DEBUG`, `INFO`, `WARNING`, `ERROR` | `INFO` |
| `--workspace_id` | Explicit workspace path (optional - auto-detected if omitted) | Auto-detected |
| `--port` | Port for HTTP mode | `8000` |
**Note:** Engrams automatically detects your workspace using project indicators (`.git`, `package.json`, `pyproject.toml`, etc.), so you typically don't need to specify `--workspace_id`.
#### IDE-Specific Setup
Add the MCP configuration to your IDE's settings:
- **Roo Code**: Settings → MCP Servers
- **Cline**: `.cline/cline_mcp_config.json`
- **Windsurf**: Cascade settings
- **Cursor**: Settings → MCP Servers
- **Claude Desktop**: `~/.config/Claude/claude_desktop_config.json` (macOS/Linux) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows)
---
## Features
### Structured Context Storage
Store your project knowledge in a structured SQLite database instead of scattered markdown files.
**Purpose**: Provide reliable, queryable storage for all project context with one database per workspace.
**How to use it:**
```
You: "Log this decision: We're using PostgreSQL for the primary database
because we need ACID guarantees and complex query support"
AI: Decision logged to Engrams:
Decision #23: Use PostgreSQL for primary database
Rationale: ACID guarantees, complex queries, mature ecosystem
Tags: database, architecture
```
**Later, when working on related features:**
```
You: "I'm implementing the user data persistence layer.
What decisions should guide this?"
AI: Found relevant decision:
Decision #23: PostgreSQL is our primary database
I'll implement using SQLAlchemy with PostgreSQL...
```
---
### Semantic Search & RAG
Vector embeddings enable semantic search - find relevant context by meaning, not just keywords.
**Purpose**: Enable Retrieval Augmented Generation (RAG) so AI agents can access precise, contextually relevant information.
**How to use it:**
```
You: "How should I handle caching for better performance?"
AI: Searching for relevant decisions...
Found:
• Decision #8: Use Redis for session caching
• Decision #15: Cache invalidation strategy (TTL-based)
• Decision #22: Database query optimization with indexes
Based on these decisions, I recommend implementing Redis
for session caching with a 24-hour TTL...
```
**What you get**: Your AI finds relevant context by understanding meaning, not just exact keyword matches. Ask about "performance optimization" and it finds decisions about caching, indexing, and query optimization.
---
### Team Governance
Two-layer hierarchy (team/individual) with conflict detection and amendment workflows.
**Purpose**: Enforce team standards while allowing individual developers flexibility. Prevent individual decisions from contradicting team-level architectural mandates.
**How to use it:**
```
You: "I want to use MongoDB for the analytics module"
AI: ⚠️ Governance Conflict Detected
Your proposal conflicts with Team Decision #5:
"Standardize on PostgreSQL for all data storage"
Scope: Core Architecture (Team-level)
This is a hard block. Would you like to propose an amendment?
```
**When you need an exception:**
```
You: "Propose an amendment to allow InfluxDB for time-series metrics"
AI: Amendment proposal created:
Amendment #2 to Decision #5
Proposal: Allow InfluxDB for time-series metrics data
Rationale: PostgreSQL isn't optimized for high-volume time-series
Status: Pending team review
Your tech lead will be notified.
```
---
### Codebase Bindings
Link Engrams entities to actual code files using glob patterns.
**Purpose**: Bridge the gap between "what we decided" and "where it lives in code". Give AI agents spatial awareness of the codebase.
**How to use it:**
```
You: "Bind the rate limiting decision to the middleware implementation
in src/api/middleware/rate_limiter.py"
AI: Binding created:
Decision #14 (Rate limiting) → src/api/middleware/rate_limiter.py
Relationship: implements
```
**When working on related code:**
```
You: "I'm adding a new API endpoint. What patterns should I follow?"
AI: Checking code context for src/api/endpoints/...
Found Decision #14: Rate limiting (1000 req/min)
Implementation: src/api/middleware/rate_limiter.py
I'll apply the rate limiting middleware to your new endpoint...
```
---
### Context Budgeting
Intelligent token-aware context selection with relevance scoring.
**Purpose**: Fit the most valuable context into your AI's token budget. No more dumping everything and hoping - get precisely ranked, budget-constrained results.
**How to use it:**
```
You: "I need to implement rate limiting for the API.
Give me the most relevant context within 4000 tokens"
AI: Retrieving relevant context (4000 token budget)...
Top-scored items:
✓ Decision #14: API rate limiting requirements (1000 req/min)
✓ Pattern #8: Redis-based token bucket implementation
✓ Code binding: src/api/middleware/rate_limiter.py
✓ Related decision: Caching strategy with Redis
Not included (lower relevance):
✗ Decision #22: Database indexing
✗ Pattern #3: Authentication flow
I'll implement rate limiting using the token bucket pattern...
```
---
### Project Onboarding
Progressive briefing system for getting up to speed on any project.
**Purpose**: Generate structured, progressive briefings instead of raw data dumps. Perfect for onboarding new team members, returning to a project after time away, or starting a fresh AI session.
**How to use it:**
```
You: "Give me a project briefing"
AI: PROJECT BRIEFING: TaskMaster API
Purpose: RESTful API for task management with team collaboration
Key Decisions:
• PostgreSQL for primary database (ACID guarantees)
• JWT tokens for stateless authentication
• Redis for session caching
• Microservices architecture with event sourcing
Current Status:
• User authentication: Complete
• Task CRUD: In progress
• Real-time notifications: Planned
Tech Stack: Python, FastAPI, PostgreSQL, Redis, Docker
```
**For deeper context:**
```
You: "Tell me more about the authentication system"
AI: AUTHENTICATION SYSTEM DETAILS
Decision #7: Use JWT tokens for stateless auth
Rationale: Scales better than sessions, works with microservices
Pattern #3: Token validation middleware
Implementation: src/auth/middleware.py
Related decisions:
• Decision #12: Token refresh strategy (7-day expiry)
• Decision #18: CORS policy for token endpoints
Code locations:
• src/auth/login.py - Token generation
• src/auth/middleware.py - Token validation
• src/auth/session.py - Session management
```
---
### Knowledge Dashboard
Browser-based visual explorer with optional local LLM chat.
**Purpose**: Explore your project knowledge visually without needing an AI agent or burning API tokens. Perfect for browsing decisions, visualizing the knowledge graph, and exploring the project conversationally via a local Ollama model.
**How to use it:**
```bash
# Start the dashboard (read-only, localhost only)
engrams-dashboard --workspace /path/to/project
# With Ollama chat enabled
engrams-dashboard --workspace /path/to/project \
--enable-chat \
--ollama-model llama3.2
# Custom port
engrams-dashboard --workspace /path/to/project --port 9000
# Auto-detect workspace
cd /path/to/project
engrams-dashboard
```
**What you see**:
- **Overview Dashboard**: Project stats, activity timeline, staleness indicators
- **Decision Explorer**: Filterable table with full-text search, tag navigation
- **Pattern Library**: System patterns with linked decisions and code bindings
- **Task Tracker**: Hierarchical progress view with status filtering
- **Knowledge Graph**: Interactive D3.js visualization showing all entity relationships
- **Custom Data Browser**: Category-based navigation with JSON pretty-printing
- **Governance Dashboard**: Scope hierarchy, active rules, compliance overview
- **Ollama Chat Panel**: Ask questions about your project using local LLM (optional)
**Security**: Binds to `127.0.0.1` (localhost only) by default. Read-only access - cannot modify data.
---
### Knowledge Graph & Relationships
Explicitly link entities to build a queryable relationship network.
**Purpose**: Capture how decisions relate to patterns, how tasks track decisions, how features depend on each other.
**How to use it:**
```
You: "Link the rate limiting pattern to Decision #14.
The pattern implements the decision"
AI: Relationship created:
Pattern #8 (Rate limiting) implements Decision #14 (API protection)
Now when you review Decision #14, you'll see which patterns
implement it. When you work on the rate limiting pattern,
you'll see the decision it addresses.
```
**Exploring relationships:**
```
You: "Show me everything related to the authentication decision"
AI: Linked items for Decision #7 (JWT authentication):
Implements:
• Pattern #3: Token validation middleware
• Pattern #5: Token refresh strategy
Tracks:
• Progress #12: Implement JWT endpoints
• Progress #18: Add token refresh logic
Related Decisions:
• Decision #12: Token expiry (7 days)
• Decision #18: CORS policy
Code Bindings:
• src/auth/login.py
• src/auth/middleware.py
```
**Common relationship types**: `implements`, `related_to`, `tracks`, `blocks`, `clarifies`, `depends_on`, `supersedes`, `resolves`
---
### Batch Operations & Export/Import
Efficient bulk operations and markdown export for version control.
**Purpose**: Log multiple items in one call, export for backup/sharing, import from version-controlled markdown.
**How to use it:**
```
You: "Log these tech stack decisions all at once:
- FastAPI for REST API
- Pydantic for validation
- SQLAlchemy for ORM
- Alembic for migrations
- pytest for testing"
AI: Logged 5 decisions:
✓ Decision #24: Use FastAPI for REST API
✓ Decision #25: Use Pydantic for validation
✓ Decision #26: Use SQLAlchemy for ORM
✓ Decision #27: Use Alembic for migrations
✓ Decision #28: Use pytest for testing
All tagged with "tech_stack" for easy retrieval.
```
**Export for version control:**
```
You: "Export all project knowledge to markdown"
AI: Exporting Engrams database to markdown...
Created ./docs/engrams_export/ with:
• decisions.md (28 decisions)
• patterns.md (8 patterns)
• progress.md (15 tasks)
• glossary.md (12 terms)
• relationships.md (all links)
You can commit this to version control.
```
**Import after cloning:**
```
You: "Import the project knowledge from markdown"
AI: Importing from ./docs/engrams_export/...
Imported:
✓ 28 decisions
✓ 8 patterns
✓ 15 progress items
✓ 12 glossary terms
✓ All relationships
Your project memory is fully restored.
```
---
## Installation
### Prerequisites
- **Python 3.10+** ([Download](https://www.python.org/downloads/))
- **uv** (recommended) - Fast Python package manager ([Install](https://github.com/astral-sh/uv#installation))
### Recommended: Using `uvx`
The easiest way to use Engrams is via `uvx`, which handles environments automatically:
```json
{
"mcpServers": {
"engrams": {
"command": "uvx",
"args": [
"--from", "engrams-mcp",
"engrams-mcp",
"--mode", "stdio",
"--log-level", "INFO"
]
}
}
}
```
Add to your MCP client settings (e.g., Roo Code, Cline, Windsurf, Cursor).
**Note**: Most IDEs don't expand `${workspaceFolder}` for MCP servers. Engrams has automatic workspace detection, so you can omit `--workspace_id` at launch. The workspace is detected per-call using project indicators (.git, package.json, etc.).
### Developer Installation
For local development:
```bash
# Clone the repository
git clone https://github.com/yourusername/engrams.git
cd engrams
# Create virtual environment
uv venv
# Install dependencies
uv pip install -r requirements.txt
# Run in your IDE using local checkout
# See README "Installation for Developers" section for MCP config
```
---
## Quick Start
### 1. Configure Your MCP Client
Add Engrams to your MCP settings (see [Installation](#installation) section).
### 2. Add Custom Instructions
Copy the appropriate strategy file for your IDE:
- **Roo Code**: [`engrams-custom-instructions/roo_code_engrams_strategy`](engrams-custom-instructions/roo_code_engrams_strategy)
- **Cline**: [`engrams-custom-instructions/cline_engrams_strategy`](engrams-custom-instructions/cline_engrams_strategy)
- **Windsurf**: [`engrams-custom-instructions/cascade_engrams_strategy`](engrams-custom-instructions/cascade_engrams_strategy)
- **Generic**: [`engrams-custom-instructions/generic_engrams_strategy`](engrams-custom-instructions/generic_engrams_strategy)
Paste the entire content into your IDE's custom instructions field.
### 3. Bootstrap Your Project (Optional but Recommended)
Create [`projectBrief.md`](projectBrief.md) in your workspace root:
```markdown
# TaskMaster API
## Purpose
RESTful API for task management with team collaboration.
## Key Features
- User authentication (JWT)
- Task CRUD with assignments
- Real-time notifications
- Team workspaces
## Architecture
- Microservices pattern
- Event sourcing for task updates
- PostgreSQL for persistence
- Redis for caching
## Tech Stack
Python, FastAPI, PostgreSQL, Redis, Docker
```
On first initialization, your AI agent will offer to import this into Product Context.
### 4. Start Using Engrams
```
You: Initialize according to custom instructions
AI: [ENGRAMS_ACTIVE] Engrams initialized. Found projectBrief.md - imported to Product Context.
What would you like to work on?
You: Add JWT authentication to the API
AI: I'll help with that. Let me retrieve relevant context...
Found Decision #7: "Use JWT tokens for stateless auth"
Found Pattern #3: "Token validation middleware"
Based on existing decisions and patterns, I'll implement JWT auth
following the established middleware pattern...
[Implementation follows]
```
---
## Automatic Workspace Detection
Engrams can automatically detect your project root - no hardcoded paths needed.
**Detection strategy** (priority order):
1. **Strong indicators**: `.git`, `package.json`, `pyproject.toml`, `Cargo.toml`, `go.mod`, `pom.xml`
2. **Multiple general indicators**: ≥2 of (README, license, build configs)
3. **Existing Engrams workspace**: `engrams/` directory present
4. **Environment variables**: `VSCODE_WORKSPACE_FOLDER`, `ENGRAMS_WORKSPACE`
5. **Fallback**: Current working directory (with warning)
See [`UNIVERSAL_WORKSPACE_DETECTION.md`](UNIVERSAL_WORKSPACE_DETECTION.md) for full details.
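The priority order can be pictured as a series of passes over the starting directory and its ancestors. The helper below is a hypothetical sketch of that order; the names `STRONG`, `GENERAL`, and `detect_workspace`, and the exact indicator lists, are illustrative rather than Engrams' API:

```python
import os
from pathlib import Path

# Hypothetical indicator sets; the real lists live in Engrams' source.
STRONG = {".git", "package.json", "pyproject.toml", "Cargo.toml", "go.mod", "pom.xml"}
GENERAL = {"README.md", "LICENSE", "Makefile"}

def detect_workspace(start: Path) -> Path:
    candidates = [start, *start.parents]
    for d in candidates:  # 1. any strong project indicator
        if {p.name for p in d.iterdir()} & STRONG:
            return d
    for d in candidates:  # 2. at least two general indicators
        if len({p.name for p in d.iterdir()} & GENERAL) >= 2:
            return d
    for d in candidates:  # 3. an existing Engrams workspace
        if (d / "engrams").is_dir():
            return d
    for var in ("VSCODE_WORKSPACE_FOLDER", "ENGRAMS_WORKSPACE"):  # 4. env vars
        if os.environ.get(var):
            return Path(os.environ[var])
    return Path.cwd()  # 5. fallback, with a warning in practice
```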
---
## Available MCP Tools
Your AI assistant uses these tools automatically. You don't need to call them directly.
### Core Context
- `get_product_context`, `update_product_context` - Project goals, features, architecture
- `get_active_context`, `update_active_context` - Current focus, recent changes
### Decisions
- `log_decision`, `get_decisions`, `search_decisions_fts`, `delete_decision_by_id`
### Progress
- `log_progress`, `get_progress`, `update_progress`, `delete_progress_by_id`
### Patterns
- `log_system_pattern`, `get_system_patterns`, `delete_system_pattern_by_id`
### Custom Data
- `log_custom_data`, `get_custom_data`, `delete_custom_data`
- `search_custom_data_value_fts`, `search_project_glossary_fts`
### Relationships
- `link_engrams_items`, `get_linked_items`
### Governance (Feature 1)
- `create_scope`, `get_scopes`
- `log_governance_rule`, `get_governance_rules`
- `check_compliance`, `get_scope_amendments`, `review_amendment`
- `get_effective_context`
### Codebase Bindings (Feature 2)
- `bind_code_to_item`, `get_bindings_for_item`, `get_context_for_files`
- `verify_bindings`, `get_stale_bindings`, `suggest_bindings`, `unbind_code_from_item`
### Context Budgeting (Feature 3)
- `get_relevant_context`, `estimate_context_size`
- `get_context_budget_config`, `update_context_budget_config`
### Onboarding (Feature 4)
- `get_project_briefing`, `get_briefing_staleness`, `get_section_detail`
### Utilities
- `get_item_history`, `get_recent_activity_summary`, `get_engrams_schema`
- `export_engrams_to_markdown`, `import_markdown_to_engrams`
- `batch_log_items`
- `get_workspace_detection_info`
See full parameter details in the original README or use `get_engrams_schema()`.
---
## Documentation
- **[Deep Dive](engrams_deep_dive.md)** - Architecture and design details
- **[Workspace Detection](UNIVERSAL_WORKSPACE_DETECTION.md)** - Auto-detection behavior
- **[Update Guide](v0.2.4_UPDATE_GUIDE.md)** - Database migration instructions
- **[Contributing](CONTRIBUTING.md)** - How to contribute
- **[AGENTS.md](AGENTS.md)** - Implementation strategy for Features 1-5
- **[Custom Instructions](engrams-custom-instructions/)** - IDE-specific strategies
---
## Architecture
- **Language**: Python 3.10+
- **Framework**: FastAPI (MCP server)
- **Database**: SQLite (one per workspace)
- **Vector Store**: ChromaDB (semantic search)
- **Migrations**: Alembic (schema evolution)
- **Protocol**: Model Context Protocol (STDIO or HTTP)
```
src/engrams/
├── main.py # Entry point, CLI args
├── server.py # FastMCP server, tool registration
├── db/ # Database layer
│ ├── database.py # SQLite operations
│ ├── models.py # Pydantic models
│ └── migrations/ # Alembic migrations
├── handlers/ # MCP tool handlers
├── governance/ # Feature 1: Team governance
├── bindings/ # Feature 2: Codebase bindings
├── budgeting/ # Feature 3: Context budgeting
├── onboarding/ # Feature 4: Project briefings
└── dashboard/ # Feature 5: Visual explorer
```
---
## Contributing
We welcome contributions! Please see [`CONTRIBUTING.md`](CONTRIBUTING.md) for:
- Code of conduct
- Development setup
- Pull request process
- Testing requirements
---
## License
This project is licensed under the [Apache-2.0 License](LICENSE).
---
## Acknowledgments
- Forked from [GreatScottyMac/context-portal](https://github.com/GreatScottyMac/context-portal) v0.3.13
- Thanks to [@cipradu](https://github.com/cipradu) for integer-string coercion implementation
- Built on the [Model Context Protocol](https://modelcontextprotocol.io/)
---
## Support
- **Issues**: [GitHub Issues](https://github.com/yourusername/engrams/issues)
- **Discussions**: [GitHub Discussions](https://github.com/yourusername/engrams/discussions)
---
<div align="center">
**[⬆ Back to Top](#engrams)**
Built with care for better AI-assisted development
</div>
| text/markdown | null | Scott McLeod <contextportal@gmail.com>, Steve Brownlee <steve@stevebrownlee.com> | null | null | Apache-2.0 | mcp, context, engrams, knowledge-graph, rag | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"De... | [] | null | null | >=3.10 | [] | [] | [] | [
"fastapi>=0.120.0",
"uvicorn[standard]>=0.38.0",
"pydantic>=2.12.5",
"fastmcp>=2.13.3",
"mcp>=1.23.0",
"sentence-transformers>=3.3.1",
"chromadb>=1.3.5",
"alembic>=1.17.2",
"sqlalchemy>=2.0.0",
"authlib>=1.6.5",
"urllib3>=2.6.0",
"filelock>=3.16.2",
"flask>=3.0; extra == \"dashboard\"",
"h... | [] | [] | [] | [
"Homepage, https://engrams.sh",
"Bug Reports, https://github.com/stevebrownlee/engrams-mcp/issues",
"Source, https://github.com/stevebrownlee/engrams-mcp",
"Documentation, https://engrams.sh/docs"
] | twine/6.2.0 CPython/3.11.11 | 2026-02-21T05:57:01.592555 | engrams_mcp-1.0.2.tar.gz | 133,827 | 6e/8e/22236c8411f9806d69ef3ca84b89025dc2c70d59e6ee7d393723dbc22103/engrams_mcp-1.0.2.tar.gz | source | sdist | null | false | 2a3b9801776e441c8d87562da8892bae | 828969a72ac0c0a9d98fa6edc19c8c75cc6ba833838322be6d8fd0cf82728d86 | 6e8e22236c8411f9806d69ef3ca84b89025dc2c70d59e6ee7d393723dbc22103 | null | [
"LICENSE"
] | 140 |
2.4 | pyprj | 0.7.0 | An opinionated CLI tool to manage python projects. | # PyPrj
An opinionated CLI tool to manage python projects.
Makes use of:
- [VS Code](https://code.visualstudio.com/) as IDE.
- [uv](https://docs.astral.sh/uv/) as package manager.
- [pytest](https://docs.pytest.org/en/stable/#) as testing framework.
- [black](https://black.readthedocs.io/en/stable/#) to format python files.
- [Prettier](https://prettier.io/) to format markdown files.
- [Sphinx](https://www.sphinx-doc.org/en/master/#) framework to write
documentation.
- [MyST](https://myst-parser.readthedocs.io/en/latest/#) parser extension to
write sphinx docs with markdown.
- [Furo](https://github.com/pradyunsg/furo?tab=readme-ov-file#furo) as sphinx
theme.
- [Jupyter](https://jupyter.org/) to write jupyter notebooks that are converted
into markdown.
- [taskipy](https://pypi.org/project/taskipy/) to run automated tasks.
- [Read the Docs](https://about.readthedocs.com/) as pre-set option to host
documentation.
- [MIT](https://opensource.org/license/mit) as license.
## Installation
A good way to install CLI tools made with python is using
[`pipx`](https://pipx.pypa.io/stable/).
```sh
pipx install pyprj
```
With `pipx`, the tool is globally installed in an isolated environment.
## Usage
Look at the help messages from the CLI (using `--help`). Some of the messages
are below.
### Main command
```none
> pyprj --help
usage: pyprj [-h] [-v] {init,test,docs,build,version,publish} ...
A CLI to manage python projects with predefined tools.
options:
-h, --help Show this help message and exit.
-v, --version show program's version number and exit
subcommands:
{init,test,docs,build,version,publish}
init Create a new project for a python package.
test Run task 'test' inside the project.
docs Manage documentation of the project.
build Run task 'build' inside the project.
version Update or show project version.
publish Publish package to PyPI.
```
### `init` subcommand
```none
> pyprj init --help
usage: pyprj init [-h] [-n <name>] [-p <python-version>] [-b <black-line-length>]
Create a new project for a python package.
options:
-h, --help Show this help message and exit.
-n <name>, --name <name>
The name of the project. If `None`, use the current
directory's name.
Defaults to 'None'.
-p <python-version>, --python-version <python-version>
The Python interpreter version to use to determine the
minimum supported Python version.
Defaults to '3.12'.
-b <black-line-length>, --black-line-length <black-line-length>
Line length parameter to use with `black`.
Defaults to '128'.
```
### `test` subcommand
```none
> pyprj test --help
usage: pyprj test [-h]
Run task 'test' inside the project.
This command only runs the task 'test' inside the project.
Tasks use the tool 'taskipy' and are currently run with the tool 'uv'.
The task 'test' runs tests with 'pytest' in folder './tests'.
options:
-h, --help Show this help message and exit.
```
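The help text above notes that tasks are defined with taskipy and run via uv. For orientation, the task table in `pyproject.toml` might look like the following sketch (task names come from the help messages; the exact commands are assumptions):

```toml
[tool.taskipy.tasks]
test = "pytest ./tests"                              # run tests (assumed command)
docs = "sphinx-build ./doc/sphinx ./doc/sphinx/_build"  # build docs (assumed command)
build = "uv build"                                   # build the package (assumed command)
```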
### `docs` subcommand
```none
> pyprj docs --help
usage: pyprj docs [-h] {init,nbex,nbmd,modm} ...
Manage documentation of the project.
If called without subcommands, runs the task 'docs' inside the project.
Tasks use the tool 'taskipy' and are currently run with the tool 'uv'.
The task 'docs' makes docs with 'sphinx' in folder './doc/sphinx'.
options:
-h, --help Show this help message and exit.
subcommands:
{init,nbex,nbmd,modm}
init Initialize documentation folder with packages.
nbex Process jupyter (nb) files to generate example files of code.
nbmd Process jupyter (nb) files to generate markdown (md) files.
modm Process documentation in modules.
```
#### `docs/init` subcommand
```none
> pyprj docs init --help
usage: pyprj docs init [-h]
Initialize documentation folder with packages.
options:
-h, --help Show this help message and exit.
```
#### `docs/nbmd` subcommand
```none
> pyprj docs nbmd --help
usage: pyprj docs nbmd [-h] [-k {tutorial,function,class}] [-n] [-r <pattern>] [-d] [filepath ...]
Process jupyter (nb) files to generate markdown (md) files.
positional arguments:
filepath The filepath or filepaths of jupyter notebook (`.ipynb`) to convert
to markdown. If `None` (default), process all notebook files from
the current directory.
options:
-h, --help Show this help message and exit.
-k {tutorial,function,class}, --kind {tutorial,function,class}
The kind of the notebook files documentation to convert.
Defaults to 'tutorial'.
-n, --no-prettier Whether to not post-process the generated .md files with
'prettier', if 'prettier' is available.
Defaults to 'False'.
-r <pattern>, --remove-pattern-shell-files <pattern>
Pattern to remove in shell command line cells. Aiming to
remove example command line folders from path.
Defaults to 'examples/'.
-d, --dont-run-notebooks-before
Whether to not run the jupyter notebooks before
processing.
Defaults to 'False'.
```
#### `docs/nbex` subcommand
```none
> pyprj docs nbex --help
usage: pyprj docs nbex [-h] [-c] [-d <dest-directory>] [-o <output-suffix>] [filepath ...]
Process jupyter (nb) files to generate example files of code.
Create files from the cells starting with '%%python'.
positional arguments:
filepath The filepath or filepaths of jupyter notebook (`.ipynb`) to
generate examples. If `None` (default), process all notebook
files from the current directory.
options:
-h, --help Show this help message and exit.
-c, --change-shell-cells
Whether to edit the following shell cells, after the
example cells.
Defaults to 'False'.
-d <dest-directory>, --dest-directory <dest-directory>
Directory of the resulting examples files.
Defaults to 'examples'.
-o <output-suffix>, --output-suffix <output-suffix>
If editing original notebook file
(`change_shell_cells=True`) add this
suffix to the resulting file. Used for debugging
purposes, to not overwrite
the original file (which is done with the default
value).
Defaults to ''.
```
#### `docs/modm` subcommand
```none
> pyprj docs modm --help
usage: pyprj docs modm [-h] [filepath ...]
Process documentation in modules.
positional arguments:
filepath The filepath or filepaths of modules (.py) to process.
If `None` (default), process all python files from the current directory.
options:
-h, --help Show this help message and exit.
```
### `build` subcommand
```none
> pyprj build --help
usage: pyprj build [-h]
Run task 'build' inside the project.
This command only runs the task 'build' inside the project.
Tasks use the tool 'taskipy' and are currently run with the tool 'uv'.
The task 'build' builds the package with 'uv' in root folder.
options:
-h, --help Show this help message and exit.
```
### `version` subcommand
```none
> pyprj version --help
usage: pyprj version [-h] [{major,minor,patch}]
Update or show project version.
If called without positional arguments, only show the project version.
positional arguments:
{major,minor,patch}
options:
-h, --help Show this help message and exit.
```
### `publish` subcommand
```none
> pyprj publish --help
usage: pyprj publish [-h]
Publish package to PyPI.
Uses token from file '.vscode/pyprj.json'
options:
-h, --help Show this help message and exit.
```
| text/markdown | Diogo Rossi | Diogo Rossi <rossi.diogo@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"clig>=0.6.3",
"taskipy>=1.14.1"
] | [] | [] | [] | [
"Documentation, https://github.com/diogo-rossi/pyprj/blob/main/README.md",
"Issues, https://github.com/diogo-rossi/pyprj/issues",
"Source, https://github.com/diogo-rossi/pyprj/"
] | uv/0.9.24 {"installer":{"name":"uv","version":"0.9.24","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T05:56:45.631490 | pyprj-0.7.0-py3-none-any.whl | 23,112 | 0a/db/0503873e48fc24effff32accdcfd00e643516e5f80b7b36472e836434a01/pyprj-0.7.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 32cb15a40e5319ccf1ec3c58f7f461e2 | 08713568f82a034c3a7b0addd27c428cff83ec751e1caeecfad69e3093a09688 | 0adb0503873e48fc24effff32accdcfd00e643516e5f80b7b36472e836434a01 | MIT | [] | 201 |
2.4 | hello-agents | 1.0.0 | Production-grade multi-agent framework - 16 core capabilities including a tool response protocol, context engineering, session persistence, a subagent mechanism, optimistic locking, a circuit breaker, and Skills knowledge externalization | # HelloAgents
> 🤖 Production-grade multi-agent framework - 16 core capabilities including a tool response protocol, context engineering, session persistence, and a subagent mechanism
[](https://www.python.org/downloads/)
[](https://creativecommons.org/licenses/by-nc-sa/4.0/)
HelloAgents is a production-grade multi-agent framework built on the native OpenAI API. It integrates 16 core capabilities: a tool response protocol (ToolResponse), context engineering (HistoryManager/TokenCounter), session persistence (SessionStore), a subagent mechanism (TaskTool), optimistic locking (file editing), a circuit breaker (CircuitBreaker), Skills knowledge externalization, TodoWrite progress management, DevLog decision records, streaming output (SSE), an async lifecycle, observability (TraceLogger), a logging system (four paradigms), and refactored LLM/Agent base classes, providing complete engineering support for building complex agent applications.
## 📌 Version Notes
> **Important**: this repository currently maintains two versions
- **📚 Learning version (recommended for beginners)**: the [learn_version branch](https://github.com/jjyaoao/HelloAgents/tree/learn_version)
  A stable version that matches the [Datawhale Hello-Agents tutorial](https://github.com/datawhalechina/hello-agents) text exactly; suitable for following along with the tutorial.
- **🚀 Development version (current branch)**: the latest, continuously iterated code (V1.0.0), including new features and improvements; some implementations may differ from the tutorial. If you are following the tutorial, switch to the `learn_version` branch.
- **📦 Historical versions**: the [Releases page](https://github.com/jjyaoao/HelloAgents/releases)
  provides all versions from v0.1.1 to v0.2.9; each corresponds to a specific tutorial chapter, so you can pick the one matching your progress.
## 🚀 Quick Start
### Installation
```bash
pip install hello-agents
```
### Basic Usage
```python
from hello_agents import ReActAgent, HelloAgentsLLM, ToolRegistry
from hello_agents.tools.builtin import ReadTool, WriteTool, TodoWriteTool
llm = HelloAgentsLLM()
registry = ToolRegistry()
registry.register_tool(ReadTool())
registry.register_tool(WriteTool())
registry.register_tool(TodoWriteTool())
agent = ReActAgent("assistant", llm, tool_registry=registry)
agent.run("Analyze the project structure and generate a report")
```
### Environment Configuration
Create a `.env` file:
```bash
LLM_MODEL_ID=your-model-name
LLM_API_KEY=your-api-key-here
LLM_BASE_URL=your-api-base-url
```
```python
# Auto-detect the provider
llm = HelloAgentsLLM()  # the framework detects the provider (e.g. modelscope) automatically
print(f"Detected provider: {llm.provider}")
```
> 💡 **Smart detection**: the framework automatically selects the appropriate provider based on the API key format and base URL
### Supported LLM Providers
The framework supports all mainstream LLM services through **3 adapters**:
#### 1. OpenAI-compatible adapter (default)
Supports every service that exposes an OpenAI-compatible interface:
| Provider type | Example services | Example configuration |
| --- | --- | --- |
| **Cloud APIs** | OpenAI, DeepSeek, Qwen, Kimi, Zhipu GLM | `LLM_BASE_URL=api.deepseek.com` |
| **Local inference** | vLLM, Ollama, SGLang | `LLM_BASE_URL=http://localhost:8000` |
| **Other compatible** | Any OpenAI-format endpoint | `LLM_BASE_URL=your-endpoint` |
#### 2. Anthropic adapter
| Provider | Detection condition | Example configuration |
| --- | --- | --- |
| **Claude** | `base_url` contains `anthropic.com` | `LLM_BASE_URL=https://api.anthropic.com` |
#### 3. Gemini adapter
| Provider | Detection condition | Example configuration |
| --- | --- | --- |
| **Google Gemini** | `base_url` contains `googleapis.com` or `generativelanguage` | `LLM_BASE_URL=https://generativelanguage.googleapis.com` |
> 💡 **Automatic adaptation**: the framework selects the adapter based on `base_url`; no manual configuration is needed.
## 🏗️ Project Structure
```
hello-agents/
├── hello_agents/                 # Main package
│   ├── core/                     # Core components
│   │   ├── llm.py                # LLM base class and configuration
│   │   ├── llm_adapters.py       # Three adapters (OpenAI/Anthropic/Gemini)
│   │   ├── agent.py              # Agent base class (Function Calling architecture)
│   │   ├── session_store.py      # Session persistence
│   │   ├── lifecycle.py          # Async lifecycle
│   │   └── streaming.py          # SSE streaming output
│   ├── agents/                   # Agent implementations
│   │   ├── simple_agent.py       # SimpleAgent
│   │   ├── react_agent.py        # ReActAgent
│   │   ├── reflection_agent.py   # ReflectionAgent
│   │   └── plan_solve_agent.py   # PlanAndSolveAgent
│   ├── tools/                    # Tool system
│   │   ├── registry.py           # Tool registry
│   │   ├── response.py           # ToolResponse protocol
│   │   ├── circuit_breaker.py    # Circuit breaker
│   │   ├── tool_filter.py        # Tool filtering (subagent mechanism)
│   │   └── builtin/              # Built-in tools
│   │       ├── file_tools.py     # File tools (optimistic locking)
│   │       ├── task_tool.py      # Subagent tool
│   │       ├── todowrite_tool.py # Progress management
│   │       ├── devlog_tool.py    # Decision log
│   │       └── skill_tool.py     # Skills knowledge externalization
│   ├── context/                  # Context engineering
│   │   ├── history.py            # HistoryManager
│   │   ├── token_counter.py      # TokenCounter
│   │   ├── truncator.py          # ObservationTruncator
│   │   └── builder.py            # ContextBuilder
│   ├── observability/            # Observability
│   │   └── trace_logger.py       # TraceLogger
│   └── skills/                   # Skills system
│       └── loader.py             # SkillLoader
├── docs/                         # Documentation
├── examples/                     # Example code
└── tests/                        # Test cases
```
## 🤝 Contributing
Contributions are welcome! Please follow these steps:
1. Fork this repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## 📄 License
This project is licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) - see the [LICENSE](LICENSE) file for details.
**License highlights**:
- ✅ **Attribution**: credit the original author when using this work
- ✅ **ShareAlike**: derivative works must use the same license
- ⚠️ **NonCommercial**: commercial use is not permitted
For commercial use, contact the project maintainers for authorization.
## 🙏 Acknowledgments
- Thanks to [Datawhale](https://github.com/datawhalechina) for their excellent open-source tutorials
- Thanks to all contributors of the [Hello-Agents tutorial](https://github.com/datawhalechina/hello-agents)
- Thanks to all researchers and developers advancing agent technology
## 📚 Documentation
Learn about the 16 core capabilities of HelloAgents v1.0.0 in detail:
### Infrastructure
- **[Tool Response Protocol](./docs/tool-response-protocol.md)** - Unified ToolResponse return format
- **[Context Engineering](./docs/context-engineering-guide.md)** - HistoryManager/TokenCounter/Truncator
### Core Capabilities
- **[Observability](./docs/observability-guide.md)** - TraceLogger tracing system
- **[Circuit Breaker](./docs/circuit-breaker-guide.md)** - CircuitBreaker fault tolerance
- **[Session Persistence](./docs/session-persistence-guide.md)** - SessionStore session management
### Enhancements
- **[Subagent Mechanism](./docs/subagent-guide.md)** - TaskTool and ToolFilter
- **[Skills Knowledge Externalization](./docs/skills-usage-guide.md)** - Skills system usage guide
- **[Optimistic Locking](./docs/file_tools.md)** - Concurrency control for file editing tools
- **[TodoWrite Progress Management](./docs/todowrite-usage-guide.md)** - Task progress tracking
### Auxiliary Features
- **[DevLog Decision Log](./docs/devlog-guide.md)** - Development decision records
- **[Async Lifecycle](./docs/async-agent-guide.md)** - Async Agent implementation
### Core Architecture
- **[Streaming Output](./docs/streaming-sse-guide.md)** - SSE streaming responses
- **[Function Calling Architecture](./docs/function-calling-architecture.md)** - LLM/Agent base class refactor
- **[Logging System](./docs/logging-system-guide.md)** - Four logging paradigms
### Extensibility
- **[Custom Tool Extensions](./docs/custom_tools_guide.md)** - Three ways to implement tools (functional / standard class / expandable)
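Several of the capabilities documented above, such as the circuit breaker, follow well-known engineering patterns. As background, a generic failure-counting circuit breaker can be sketched as follows (an illustrative sketch only, not HelloAgents' actual implementation; the class and parameter names here are invented):

```python
import time


class CircuitBreaker:
    """Generic circuit-breaker sketch: after `max_failures` consecutive
    failures the circuit opens and calls are rejected; after `reset_timeout`
    seconds it half-opens and allows one trial call."""

    def __init__(self, max_failures: int = 3, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # monotonic timestamp when the circuit opened

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

Wrapping flaky tool invocations this way prevents a repeatedly failing tool from being called in a tight loop.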
---
<div align="center">
**HelloAgents** - Making agent development simple and powerful 🚀
</div>
| text/markdown | null | HelloAgents Team <jjyaoao@126.com> | null | null | CC-BY-NC-SA-4.0 | null | [] | [] | null | null | >=3.10 | [] | [] | [] | [
"openai<2.0.0,>=1.0.0",
"requests<3.0.0,>=2.25.0",
"python-dotenv<2.0.0,>=0.19.0",
"pydantic<3.0.0,>=2.0.0",
"numpy<3.0.0,>=2.0.0",
"networkx<4.0.0,>=2.6.0",
"tiktoken>=0.5.0",
"pyyaml>=6.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/jjyaoao/HelloAgents",
"Documentation, https://github.com/jjyaoao/HelloAgents/blob/main/README.md",
"Repository, https://github.com/jjyaoao/HelloAgents",
"Bug Tracker, https://github.com/jjyaoao/HelloAgents/issues"
] | twine/6.2.0 CPython/3.10.19 | 2026-02-21T05:55:53.866421 | hello_agents-1.0.0.tar.gz | 250,955 | 62/49/37e33daf797f418dda50a1db134f75f902e0da1f66118052f54e31e457ff/hello_agents-1.0.0.tar.gz | source | sdist | null | false | a7fe947bd040c001f16dc55d26cffa5c | 9154985cd49684fac80c64a728b10be95b63f9ab95b2309dae294de2a3089d25 | 624937e33daf797f418dda50a1db134f75f902e0da1f66118052f54e31e457ff | null | [
"LICENSE"
] | 245 |
2.4 | gbizinfo | 0.1.1 | gBizINFO REST API v2 client library for Python | # gbizinfo — gBizINFO REST API v2 Python client
[](https://pypi.org/project/gbizinfo/)
[](https://pypi.org/project/gbizinfo/)
[](https://context7.com/youseiushida/gbizinfo)
[](https://context7.com/youseiushida/gbizinfo/llms.txt)
**gbizinfo** is a Python client library for the [gBizINFO (corporate activity information)](https://info.gbiz.go.jp/) REST API v2. It supports every endpoint for corporate search, lookup by corporate number, and incremental updates, and provides synchronous and asynchronous clients, automatic pagination, local caching, retries, and rate limiting. HTTP communication is handled internally by [httpx](https://github.com/encode/httpx).
[GitHub Repository](https://github.com/youseiushida/gbizinfo)
> **Known API-side issues (as of February 2025)**
>
> The following endpoints/parameters are defined in the OpenAPI specification but are known not to work on the API server. This library sends requests according to the specification, but the server may return 404 or 500 due to these server-side issues.
>
> - **Search parameters `patent` and `certification`**: including them in a search request returns 404
> - **Incremental update endpoints `/v2/hojin/updateInfo/patent` and `/v2/hojin/updateInfo/subsidy`**: return 500 Internal Server Error
## Installation
```sh
pip install gbizinfo
```
## Quick Start
Obtain an API token from [gBizINFO](https://info.gbiz.go.jp/).
```python
from gbizinfo import GbizClient
with GbizClient(api_token="YOUR_TOKEN") as client:
    # Search by corporate name
    result = client.search(name="トヨタ", limit=5)
    for item in result.items:
        print(item.corporate_number, item.name)

    # Fetch details by corporate number
    info = client.get("1180301018771")
    print(info.name, info.location, info.capital_stock)
```
If the environment variable `GBIZINFO_API_TOKEN` is set, the argument can be omitted.
```sh
export GBIZINFO_API_TOKEN="YOUR_TOKEN"
```
```python
with GbizClient() as client:  # token picked up from the environment variable
    result = client.search(name="ソニー")
```
## Corporate Search
`search()` supports more than 30 search parameters, which can be specified type-safely with Enums.
```python
from gbizinfo import GbizClient
from gbizinfo.enums import Prefecture, CorporateType, Ministry, Source
with GbizClient(api_token="YOUR_TOKEN") as client:
    # Prefecture + corporate type
    result = client.search(
        prefecture=Prefecture.東京都,
        corporate_type=CorporateType.株式会社,
        limit=10,
    )

    # Multiple corporate types
    result = client.search(
        corporate_type=[CorporateType.株式会社, CorporateType.合同会社],
        prefecture=Prefecture.大阪府,
    )

    # Range filters on capital stock and employee count
    result = client.search(
        capital_stock_from=100_000_000,
        capital_stock_to=500_000_000,
        employee_number_from=1000,
        prefecture=Prefecture.愛知県,
    )

    # Filter by data source and responsible ministry
    result = client.search(source=Source.調達, ministry=Ministry.国税庁)

    # Keyword search on subsidies and procurement
    result = client.search(subsidy="環境", prefecture=Prefecture.東京都)
    result = client.search(procurement="情報", prefecture=Prefecture.東京都)
```
### Workplace Information Enums
Workplace information parameters can also be specified type-safely with Enums.
```python
from gbizinfo.enums import (
    AverageAge,
    AverageContinuousServiceYears,
    MonthAverageOvertimeHours,
    FemaleWorkersProportion,
)

result = client.search(
    average_age=AverageAge.歳30以下,
    prefecture=Prefecture.東京都,
)

result = client.search(
    female_workers_proportion=FemaleWorkersProportion.割合61以上,
    prefecture=Prefecture.東京都,
)
```
## Lookup by Corporate Number
Fetch detailed information for a 13-digit corporate number, with automatic check-digit validation.
```python
info = client.get("7000012050002")  # National Tax Agency
print(info.name)                  # "国税庁"
print(info.corporate_number)      # "7000012050002"
print(info.location)              # address
print(info.capital_stock)         # capital stock
print(info.employee_number)       # number of employees
print(info.date_of_establishment) # date of establishment
```
### Sub-resources
Various records tied to a corporate number can be fetched individually.
```python
cert = client.get_certification("7000012050002")  # notifications and certifications
comm = client.get_commendation("7000012050002")   # commendations
corp = client.get_corporation("7000012050002")    # notifications/approvals
fin = client.get_finance("7000012050002")         # finance
pat = client.get_patent("7000012050002")          # patents
proc = client.get_procurement("7000012050002")    # procurement
sub = client.get_subsidy("7000012050002")         # subsidies
work = client.get_workplace("7000012050002")      # workplace information
```
## Incremental Updates
Fetch corporate records updated within a given period.
```python
from datetime import date, timedelta
to_date = date.today()
from_date = to_date - timedelta(days=3)
result = client.get_update_info(from_date=from_date, to_date=to_date)
print(result.total_count)  # total record count
print(result.total_page)   # total page count
print(len(result.items))   # records fetched

for item in result.items:
    print(item.corporate_number, item.name)
```
Per-category incremental update endpoints are also supported.
```python
client.get_update_certification(from_date=..., to_date=...)
client.get_update_commendation(from_date=..., to_date=...)
client.get_update_corporation(from_date=..., to_date=...)
client.get_update_finance(from_date=..., to_date=...)
client.get_update_patent(from_date=..., to_date=...)
client.get_update_procurement(from_date=..., to_date=...)
client.get_update_subsidy(from_date=..., to_date=...)
client.get_update_workplace(from_date=..., to_date=...)
```
## Flattening with to_flat_dict()
Nested corporate records can be converted into flat dictionaries, which is convenient for building pandas DataFrames.
```python
info = client.get("1180301018771")

# Choose one of four ways to handle lists
flat = info.to_flat_dict(lists="count")    # lists → counts only
flat = info.to_flat_dict(lists="first")    # lists → expand the first element only
flat = info.to_flat_dict(lists="json")     # lists → JSON strings
flat = info.to_flat_dict(lists="explode")  # lists → expanded as _0, _1, ...

# Flatten a whole SearchResult / UpdateResult at once
result = client.search(name="トヨタ", limit=10)
dicts = result.to_flat_dicts(lists="count")

# Example: converting to a pandas DataFrame
import pandas as pd

df = pd.DataFrame(dicts)
```
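For intuition about what flattening does, a generic recursive flattener in the `lists="count"` style can be sketched like this (an illustrative sketch only, not the library's actual implementation; the sample record is made up):

```python
def flatten(record: dict, parent: str = "", sep: str = "_") -> dict:
    """Flatten a nested dict; summarize lists by their length."""
    out = {}
    for key, value in record.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            out.update(flatten(value, name, sep))   # recurse into nested dicts
        elif isinstance(value, list):
            out[f"{name}{sep}count"] = len(value)   # lists → counts only
        else:
            out[name] = value
    return out


row = flatten({"name": "Example Co.",
               "finance": {"capital_stock": 1000},
               "patents": [{"id": 1}, {"id": 2}]})
# → {'name': 'Example Co.', 'finance_capital_stock': 1000, 'patents_count': 2}
```

Each flat row can then be fed straight into `pandas.DataFrame`.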
## Async Client
Just import `AsyncGbizClient` instead of `GbizClient` and add `await` to each call; the API is identical to the synchronous version.
```python
import asyncio
from gbizinfo import AsyncGbizClient
async def main():
    async with AsyncGbizClient(api_token="YOUR_TOKEN") as client:
        result = await client.search(name="ソニー", limit=3)
        for item in result.items:
            print(item.name)

        info = await client.get("1180301018771")
        print(info.name)

asyncio.run(main())
```
The async client can limit the number of concurrent requests via `max_concurrent` (default 10).
## Automatic Pagination
The gBizINFO API is paginated; when a search returns more than `limit` records, multiple requests are required. `paginate_search()` and `paginate_update_info()` handle this transparently and return all records as an iterator.
```python
# Auto-paginated search results
for item in client.paginate_search(prefecture=Prefecture.東京都, limit=2000):
    print(item.corporate_number, item.name)

# Auto-paginated incremental updates
from datetime import date

for item in client.paginate_update_info(
    from_date=date(2025, 2, 1),
    to_date=date(2025, 2, 5),
):
    print(item.corporate_number, item.update_date)
```
For safety, exceeding 10 pages (up to 50,000 records) raises `PaginationLimitExceededError`. Narrow the search conditions or shorten the period and fetch in chunks.
### get_recent_updates()
A helper that fetches the last N days of updates.
```python
for item in client.get_recent_updates(days=7):
    print(item.corporate_number, item.name)
```
## Enum Reference
Eliminate magic strings and specify parameters safely with IDE completion.
| Enum | Purpose | Example |
|:---|:---|:---|
| `Prefecture` | Prefectures (47) | `Prefecture.東京都` → `"13"` |
| `CorporateType` | Corporate types (10) | `CorporateType.株式会社` → `"301"` |
| `Region` | Regions (10) | `Region.関東.prefectures` → tuple of prefectures |
| `Ministry` | Responsible ministries (49) | `Ministry.国税庁` |
| `Source` | Data sources (6) | `Source.調達` |
| `AverageAge` | Average age brackets | `AverageAge.歳30以下` |
| `AverageContinuousServiceYears` | Average length-of-service brackets | `AverageContinuousServiceYears.年21以上` |
| `MonthAverageOvertimeHours` | Average monthly overtime brackets | `MonthAverageOvertimeHours.時間20未満` |
| `FemaleWorkersProportion` | Female worker ratio brackets | `FemaleWorkersProportion.割合61以上` |
| `BusinessItem` | Business items | `BusinessItem.情報処理` |
| `QualificationType` | Unified qualification types (all ministries) | `QualificationType.物品の製造` |
| `PatentClassification` | Patent classifications (133) | `PatentClassification.食品_食料品` |
| `DesignClassification` | Design classifications (57) | `DesignClassification.衣服` |
| `TrademarkClassification` | Trademark classifications (45) | `TrademarkClassification.化学品` |
| `PatentType` | IP types | `PatentType.特許` |
All Enums inherit from `StrEnum`, so they can be used directly as strings.
```python
from gbizinfo.enums import Prefecture
Prefecture.東京都 == "13" # True
```
## Caching
Setting `cache_dir` enables a local file cache. Within the TTL (default 24 hours), results are served from the cache without calling the API.
```python
from gbizinfo import GbizClient
from gbizinfo.config import CacheMode
client = GbizClient(
    api_token="YOUR_TOKEN",
    cache_dir="./cache",             # cache directory
    cache_mode=CacheMode.READ_WRITE,
    cache_ttl=60 * 60 * 12,          # 12 hours
)

# Cache modes
# CacheMode.OFF            caching disabled (default)
# CacheMode.READ           read-only (no cache writes)
# CacheMode.READ_WRITE     read and write
# CacheMode.FORCE_REFRESH  always refetch from the API and update the cache
```
## Error Handling
API and transport errors are raised as exception classes by type. All exceptions inherit from `GbizError`.
```python
import gbizinfo
from gbizinfo import GbizClient
with GbizClient(api_token="YOUR_TOKEN") as client:
    try:
        info = client.get("7000012050002")
    except gbizinfo.GbizBadRequestError as e:
        # 400: invalid parameters (with an errors array)
        print(e.context.status_code, e.errors)
    except gbizinfo.GbizUnauthorizedError as e:
        # 401: invalid token
        print(e.context.status_code)
    except gbizinfo.GbizNotFoundError as e:
        # 404: corporate number not registered
        print(e.context.status_code)
    except gbizinfo.GbizRateLimitError as e:
        # 429: rate limit exceeded
        print(e.context.retry_after)
    except gbizinfo.GbizServerError as e:
        # 5xx: server error
        print(e.context.status_code)
    except gbizinfo.GbizTransportError as e:
        # network/transport error
        print(e.original)
    except gbizinfo.GbizValidationError as e:
        # pre-send validation error
        print(e)
```
Exception reference:
| Status | Exception class | Description |
|:---|:---|:---|
| 400 | `GbizBadRequestError` | Invalid parameters (with an `errors` array) |
| 401 | `GbizUnauthorizedError` | Invalid token |
| 403 | `GbizForbiddenError` | Access forbidden |
| 404 | `GbizNotFoundError` | Resource not found |
| 429 | `GbizRateLimitError` | Rate limit exceeded |
| 5xx | `GbizServerError` | Server error |
| --- | `GbizTransportError` | Network/transport error |
| --- | `GbizTimeoutError` | Timeout |
| --- | `GbizValidationError` | Pre-send validation |
| --- | `GbizCorporateNumberError` | Corporate number validation |
| --- | `PaginationLimitExceededError` | Pagination limit exceeded |
## Retries
Transport errors (timeouts, connection errors) and 429 / 5xx responses are retried automatically with exponential backoff plus jitter (default: up to 5 attempts). The `Retry-After` header is also honored.
```python
client = GbizClient(
    api_token="YOUR_TOKEN",
    retry_max_attempts=3,  # maximum attempts (default: 5)
    retry_base_delay=1.0,  # backoff base in seconds (default: 0.5)
    retry_cap_delay=16.0,  # backoff cap in seconds (default: 8.0)
)

# Disable retries
client = GbizClient(api_token="YOUR_TOKEN", retry_max_attempts=1)
```
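Exponential backoff with jitter is a standard pattern. As an illustration of how such delays are typically generated (a generic sketch, not gbizinfo's exact implementation; the function and parameter names are invented):

```python
import random


def backoff_delays(max_attempts: int = 5, base: float = 0.5, cap: float = 8.0):
    """Yield one delay per retry attempt: exponential growth capped at `cap`,
    with full jitter (a uniform draw between 0 and the capped bound)."""
    for attempt in range(max_attempts):
        bound = min(cap, base * (2 ** attempt))  # 0.5, 1.0, 2.0, 4.0, 8.0, ...
        yield random.uniform(0.0, bound)


delays = list(backoff_delays())
```

Jitter spreads retries out in time, which avoids synchronized retry storms against an already struggling server.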
## Rate Limiting
To avoid being cut off for high-frequency access, a rate limit of 1 request per second is applied by default.
```python
client = GbizClient(
    api_token="YOUR_TOKEN",
    rate_limit_per_sec=1.5,  # 1.5 requests/second
)
```
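A per-second rate limit like this is commonly enforced by spacing requests at a minimum interval. A minimal generic sketch (illustrative only, not the library's implementation):

```python
import time


class RateLimiter:
    """Block in acquire() until at least 1/per_sec seconds have passed
    since the previous acquire()."""

    def __init__(self, per_sec: float = 1.0):
        self.min_interval = 1.0 / per_sec
        self.last = None  # monotonic time of the previous acquire

    def acquire(self):
        now = time.monotonic()
        if self.last is not None:
            wait = self.min_interval - (now - self.last)
            if wait > 0:
                time.sleep(wait)  # pace the caller
        self.last = time.monotonic()
```

Calling `acquire()` before each HTTP request paces all calls to the configured rate.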
## Timeouts
The default timeout is 30 seconds.
```python
client = GbizClient(api_token="YOUR_TOKEN", timeout=60.0)
```
## Customizing the HTTP Client
You can configure or supply the underlying [httpx](https://www.python-httpx.org/) client directly.
```python
import httpx
from gbizinfo import GbizClient
# Via a proxy
client = GbizClient(api_token="YOUR_TOKEN", proxy="http://my.proxy:8080")

# Enable HTTP/2 (requires: pip install 'httpx[http2]')
client = GbizClient(api_token="YOUR_TOKEN", http2=True)

# Inject an external httpx.Client (its lifecycle is managed by the user)
http_client = httpx.Client(
    base_url="https://api.info.gbiz.go.jp/hojin",
    timeout=60.0,
)
client = GbizClient(api_token="YOUR_TOKEN", http_client=http_client)
```
## Managing HTTP Resources
By default, release HTTP connections by calling `close()` or by using a context manager.
```python
# Context manager (recommended)
with GbizClient(api_token="YOUR_TOKEN") as client:
    result = client.search(name="テスト")

# Manual close
client = GbizClient(api_token="YOUR_TOKEN")
try:
    result = client.search(name="テスト")
finally:
    client.close()
```
Async version:
```python
async with AsyncGbizClient(api_token="YOUR_TOKEN") as client:
    ...

# or
client = AsyncGbizClient(api_token="YOUR_TOKEN")
try:
    ...
finally:
    await client.aclose()
```
## Requirements
Python 3.12 or later.
| text/markdown | null | YoseiUshida <146376339+youseiushida@users.noreply.github.com> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Libraries",
"Typing :: Typed"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"anyio>=4.12.1",
"httpx>=0.28.1",
"pydantic>=2.12.5"
] | [] | [] | [] | [
"Homepage, https://github.com/youseiushida/gbizinfo",
"Repository, https://github.com/youseiushida/gbizinfo"
] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T05:55:19.003075 | gbizinfo-0.1.1.tar.gz | 234,498 | 59/f0/2c59b07b0d1e8481ca8594826aad98a71e62581287a024460df0ad90905b/gbizinfo-0.1.1.tar.gz | source | sdist | null | false | d43b8d44748325f5dd716caac79bc258 | 2f6ebfc03a67b412cb09584167fbe875a55fe3ed2befddea8ec4ac3eb62eb23d | 59f02c59b07b0d1e8481ca8594826aad98a71e62581287a024460df0ad90905b | MIT | [
"LICENSE"
] | 211 |
2.4 | codehydra | 2026.2.21 | Multi-workspace IDE for parallel AI agent development | # CodeHydra
Multi-workspace IDE for parallel AI agent development.
## Installation
```bash
# Run directly without installation
uvx codehydra
# Or install globally
pip install codehydra
codehydra
```
## Features
- Run multiple AI agents simultaneously in isolated git worktrees
- Real-time status monitoring across all workspaces
- Keyboard-driven navigation (Alt+X shortcut mode)
- Full VS Code integration via code-server
- Built-in voice dictation
## How It Works
This package downloads the appropriate CodeHydra binary for your platform from GitHub Releases on first run, caches it locally, and executes it with any passed arguments.
Supported platforms:
- Linux x64
- macOS x64 and arm64
- Windows x64
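Selecting the right binary for the running platform is typically done with Python's `platform` module. A generic sketch (the asset-naming scheme shown here is an assumption for illustration, not CodeHydra's actual scheme):

```python
import platform


def release_asset_name() -> str:
    """Map the running interpreter's OS and CPU architecture to a
    hypothetical release asset name."""
    system = platform.system().lower()    # 'linux', 'darwin', or 'windows'
    machine = platform.machine().lower()  # 'x86_64', 'amd64', 'arm64', ...
    if machine in ("x86_64", "amd64"):
        machine = "x64"  # normalize the two common x86-64 spellings
    return f"codehydra-{system}-{machine}"
```

The launcher would then download that asset from GitHub Releases on first run and cache it for later invocations.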
## Links
- [GitHub Repository](https://github.com/stefanhoelzl/codehydra)
- [Releases](https://github.com/stefanhoelzl/codehydra/releases)
## License
MIT
| text/markdown | Stefan Hoelzl | null | null | null | null | ai, agent, ide, vscode, git, worktree | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming ... | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/stefanhoelzl/codehydra",
"Repository, https://github.com/stefanhoelzl/codehydra",
"Issues, https://github.com/stefanhoelzl/codehydra/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:53:43.173863 | codehydra-2026.2.21.tar.gz | 2,748 | f3/fb/75b9624b362e7f7193da51cb13d97fa0e29d5bc56140adf38a9a73b4ab22/codehydra-2026.2.21.tar.gz | source | sdist | null | false | 311a68b43dffec4e475a0eafb007d08e | b5d43ad01a05b695a8ef0c62d9d31463be12e1ef277ab3bfda243ffacf74ba25 | f3fb75b9624b362e7f7193da51cb13d97fa0e29d5bc56140adf38a9a73b4ab22 | MIT | [] | 209 |
2.3 | karpo-op-sdk | 0.3.0 | The official Python library for the karpo API | # Karpo Python API library
<!-- prettier-ignore -->
[](https://pypi.org/project/karpo-op-sdk/)
The Karpo Python library provides convenient access to the Karpo REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## Documentation
The REST API documentation can be found on [docs.karpo.ai](https://docs.karpo.ai). The full API of this library can be found in [api.md](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install karpo-op-sdk
```
## Usage
The full API of this library can be found in [api.md](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/api.md).
```python
import os
from karpo_sdk import Karpo
client = Karpo(
    api_key=os.environ.get("KARPO_API_KEY"),  # This is the default and can be omitted
)
page = client.agents.list()
print(page.data)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `KARPO_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncKarpo` instead of `Karpo` and use `await` with each API call:
```python
import os
import asyncio
from karpo_sdk import AsyncKarpo
client = AsyncKarpo(
    api_key=os.environ.get("KARPO_API_KEY"),  # This is the default and can be omitted
)


async def main() -> None:
    page = await client.agents.list()
    print(page.data)


asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install 'karpo-op-sdk[aiohttp]'
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from karpo_sdk import DefaultAioHttpClient
from karpo_sdk import AsyncKarpo
async def main() -> None:
    async with AsyncKarpo(
        api_key=os.environ.get("KARPO_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        page = await client.agents.list()
        print(page.data)


asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
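The two helpers differ only in their output type; the round trip can be sketched with stdlib dataclasses standing in for the Pydantic models (illustrative only; `Agent` and its fields are hypothetical, not the SDK's actual classes):

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class Agent:
    id: str
    name: str


agent = Agent(id="agent_123", name="support-bot")

as_dict = asdict(agent)        # analogous to model.to_dict()
as_json = json.dumps(as_dict)  # analogous to model.to_json()
```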
## Pagination
List methods in the Karpo API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from karpo_sdk import Karpo
client = Karpo()
all_agents = []
# Automatically fetches more pages as needed.
for agent in client.agents.list():
    # Do something with agent here
    all_agents.append(agent)
print(all_agents)
```
Or, asynchronously:
```python
import asyncio
from karpo_sdk import AsyncKarpo
client = AsyncKarpo()
async def main() -> None:
    all_agents = []
    # Iterate through items across all pages, issuing requests as needed.
    async for agent in client.agents.list():
        all_agents.append(agent)
    print(all_agents)


asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control when working with pages:
```python
first_page = await client.agents.list()
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")
# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python
first_page = await client.agents.list()
for agent in first_page.data:
    print(agent.id)
# Remove `await` for non-async usage.
```
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `karpo_sdk.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `karpo_sdk.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `karpo_sdk.APIError`.
```python
import karpo_sdk
from karpo_sdk import Karpo
client = Karpo()
try:
    client.agents.list()
except karpo_sdk.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except karpo_sdk.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except karpo_sdk.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
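The mapping above can be expressed as a plain lookup (an illustrative stdlib sketch of the table; the real library raises these exception classes for you):

```python
# Status-code → error-type mapping from the table above (illustrative only).
ERROR_TYPES = {
    400: "BadRequestError",
    401: "AuthenticationError",
    403: "PermissionDeniedError",
    404: "NotFoundError",
    422: "UnprocessableEntityError",
    429: "RateLimitError",
}


def error_type_for(status_code: int) -> str:
    """Return the error-type name the SDK would raise for a status code."""
    if status_code >= 500:
        return "InternalServerError"
    # Any other non-success code falls back to the base APIStatusError.
    return ERROR_TYPES.get(status_code, "APIStatusError")
```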
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from karpo_sdk import Karpo
# Configure the default for all requests:
client = Karpo(
    # default is 2
    max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).agents.list()
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from karpo_sdk import Karpo

# Configure the default for all requests:
client = Karpo(
    # 20 seconds (default is 1 minute)
    timeout=20.0,
)

# More granular control:
client = Karpo(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)

# Override per-request:
client.with_options(timeout=5.0).agents.list()
```
On timeout, an `APITimeoutError` is raised.
Note that requests that time out are [retried twice by default](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `KARPO_LOG` to `info`.
```shell
$ export KARPO_LOG=info
```
Or to `debug` for more verbose logging.
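The flag maps onto the stdlib `logging` module roughly like this (an illustrative sketch of the behavior described above, not the SDK's internal code; the helper name is hypothetical):

```python
import logging
import os


def configure_logging_from_env(var: str = "KARPO_LOG") -> logging.Logger:
    """Map an env-var value ('info' or 'debug') onto the karpo_sdk logger.

    Illustrative sketch only: reads the variable, translates it to a
    logging level, and applies it. Unrecognized values leave logging off.
    """
    level_name = os.environ.get(var, "").lower()
    levels = {"info": logging.INFO, "debug": logging.DEBUG}
    logger = logging.getLogger("karpo_sdk")
    if level_name in levels:
        logging.basicConfig(level=levels[level_name])
        logger.setLevel(levels[level_name])
    return logger
```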
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from karpo_sdk import Karpo
client = Karpo()
response = client.agents.with_raw_response.list()
print(response.headers.get('X-My-Header'))
agent = response.parse() # get the object that `agents.list()` would have returned
print(agent.id)
```
These methods return an [`APIResponse`](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/src/karpo_sdk/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/src/karpo_sdk/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.agents.with_streaming_response.list() as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and the other HTTP verb methods. Options on the client (such as retries) will be respected when making these requests.
```py
import httpx
response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from karpo_sdk import Karpo, DefaultHttpxClient
client = Karpo(
    # Or use the `KARPO_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from karpo_sdk import Karpo
with Karpo() as client:
    # make requests here
    ...

# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/machinepulse-ai/karpo-op-python-sdk/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import karpo_sdk
print(karpo_sdk.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Karpo <contact@example.com> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/machinepulse-ai/karpo-op-python-sdk",
"Repository, https://github.com/machinepulse-ai/karpo-op-python-sdk"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-21T05:53:29.680103 | karpo_op_sdk-0.3.0.tar.gz | 136,083 | 53/05/3c67acf2ebd54c31a1cf46ecf78e26abd49b8061c34d728068900ac12d3f/karpo_op_sdk-0.3.0.tar.gz | source | sdist | null | false | 855a6259d52bb454c2c47055f9ec2bec | 509c8fe4aeb6876b76a8770891a5ec5256efc2738d6cb3a119c283737f83f0b7 | 53053c67acf2ebd54c31a1cf46ecf78e26abd49b8061c34d728068900ac12d3f | null | [] | 229 |
2.4 | pykyber | 0.1.2 | Kyber post-quantum key encapsulation in Rust | # PyKyber
A Python library for Kyber post-quantum key encapsulation, implemented in Rust.
## Installation
```bash
pip install pykyber
```
## Quick Start
```python
import pykyber
# Generate a keypair (Alice)
alice_keypair = pykyber.Kyber768()
# Encapsulate - create shared secret (Bob)
bob_result = pykyber.Kyber768.encapsulate(alice_keypair.public_key)
# Decapsulate - recover shared secret (Alice)
shared_secret = alice_keypair.decapsulate(bob_result.ciphertext)
# Both parties now share the same secret
print(f"Match: {bob_result.shared_secret == shared_secret}")
```
## API Usage
### Class-based API (Recommended)
The simplest way to use Kyber:
```python
import pykyber
# Create a keypair - instant generation on class instantiation
keypair = pykyber.Kyber512() # ~AES-128 security
keypair = pykyber.Kyber768() # ~AES-192 security
keypair = pykyber.Kyber1024() # ~AES-256 security
# Access raw key bytes
public_key = keypair.public_key # bytes
secret_key = keypair.secret_key # bytes
# Encapsulate - create ciphertext and shared secret
result = keypair.encapsulate()
# result.ciphertext - bytes to send to receiver
# result.shared_secret - 32 bytes shared secret
# Decapsulate - recover shared secret from ciphertext
shared_secret = keypair.decapsulate(result.ciphertext)
```
## Key Sizes
| Variant | Public Key | Secret Key | Ciphertext | Shared Secret |
|-----------|------------|------------|------------|---------------|
| Kyber-512 | 800 bytes | 1632 bytes | 768 bytes | 32 bytes |
| Kyber-768 | 1184 bytes | 2400 bytes | 1088 bytes | 32 bytes |
| Kyber-1024| 1568 bytes | 3168 bytes | 1408 bytes | 32 bytes |
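The table can be encoded as a quick sanity check (a stdlib-only sketch; the helper name is hypothetical, and the byte lengths are the standard Kyber parameter-set sizes shown above):

```python
# Expected byte lengths for each Kyber variant (from the table above).
KYBER_SIZES = {
    "Kyber512":  {"public_key": 800,  "secret_key": 1632, "ciphertext": 768,  "shared_secret": 32},
    "Kyber768":  {"public_key": 1184, "secret_key": 2400, "ciphertext": 1088, "shared_secret": 32},
    "Kyber1024": {"public_key": 1568, "secret_key": 3168, "ciphertext": 1408, "shared_secret": 32},
}


def check_keypair_sizes(variant: str, public_key: bytes, secret_key: bytes) -> bool:
    """Return True if the raw key lengths match the expected sizes for the variant."""
    expected = KYBER_SIZES[variant]
    return (len(public_key) == expected["public_key"]
            and len(secret_key) == expected["secret_key"])
```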
## License
MIT
| text/markdown; charset=UTF-8; variant=GFM | PyKyber Contributors | null | null | null | MIT | kyber, post-quantum, cryptography, pqc, key-encapsulation | [
"Programming Language :: Python :: 3",
"Programming Language :: Rust",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T05:53:01.012395 | pykyber-0.1.2-cp38-abi3-musllinux_1_2_i686.whl | 541,794 | 04/29/a7965d2f69690076da9d112397ed3f3132a2c125a583c0a4f454afb05bd4/pykyber-0.1.2-cp38-abi3-musllinux_1_2_i686.whl | cp38 | bdist_wheel | null | false | dd7ad749efbe5929c027d2553c333628 | d96c94fca54cef5e07f8c87507bb1a5d4bb00cca91ef2bf0d3c3d8588b1345e2 | 0429a7965d2f69690076da9d112397ed3f3132a2c125a583c0a4f454afb05bd4 | null | [
"LICENSE"
] | 920 |
2.4 | pyThermoEst | 0.4.0 | A Python toolkit for estimating thermodynamic properties | # 🧪 PyThermoEst
[](https://pepy.tech/projects/pythermoest)



PyThermoEst is a Python toolkit for estimating thermodynamic properties from group-contribution methods. It currently supports the **Joback** method for estimating a wide range of properties and the **Zabransky–Ruzicka** method for predicting the liquid-phase heat capacity.
## 📦 Installation
```bash
pip install pyThermoEst
```
## 🚀 Usage
The package exposes convenience helpers in `pyThermoEst.app` for both calculation workflows. See the `examples/` directory for runnable scripts.
### 🧬 Joback property estimation
Provide Joback group contributions along with the total number of atoms, then call `joback_calc`. You can mix field names or their documented aliases when building `JobackGroupContributions`.
```python
from pyThermoEst import joback_calc
from pyThermoEst.models import JobackGroupContributions, GroupUnit
payload = {
"-CH3": GroupUnit(value=2),
"=CH- @ring": GroupUnit(value=3),
"=C< @ring": GroupUnit(value=3),
"-OH @phenol": GroupUnit(value=1),
}
joback_groups = JobackGroupContributions(**payload)
# or dict with aliases:
# joback_groups = {
# "-CH3": 2,
# "=CH- @ring": 3,
# "=C< @ring": 3,
# "-OH @phenol": 1,
# }
result = joback_calc(groups=joback_groups, total_atoms_number=18)
# Evaluate a temperature-dependent property, e.g., heat capacity at 273 K
cp_273 = result["heat_capacity"]["value"](273)
print(cp_273)
```
### 💧 Zabransky–Ruzicka liquid heat capacity
To compute liquid heat capacity, provide required group contributions and optional correction terms, then call `zabransky_ruzicka_calc`.
```python
from pyThermoEst import zabransky_ruzicka_calc
from pyThermoEst.models import (
ZabranskyRuzickaGroupContributions,
ZabranskyRuzickaGroupContributionsCorrections,
GroupUnit,
)
payload = {
"C-(H)3(O)": GroupUnit(value=2),
"CO-(O)2": GroupUnit(value=1),
"O-(C)(CO)": GroupUnit(value=2),
}
contributions = ZabranskyRuzickaGroupContributions(**payload)
corrections = ZabranskyRuzickaGroupContributionsCorrections()
# or dicts with aliases:
# contributions = {
# "C-(H)3(O)": 2,
# "CO-(O)2": 1,
# "O-(C)(CO)": 2,
# }
result = zabransky_ruzicka_calc(
group_contributions=contributions,
group_corrections=corrections
)
cp_liq_at_298k = result["value"](298.15)
print(cp_liq_at_298k, result["unit"], result["symbol"])
```
### 📚 Getting group contribution IDs and names
You can inspect available group contribution identifiers and names for each method:
#### 🧬 Joback group contributions
```python
from pyThermoEst.docs.joback import (
joback_group_contribution_info,
joback_group_contribution_names,
joback_group_contribution_ids
)
# Get all group contribution IDs (aliases)
group_ids = joback_group_contribution_ids()
# Get all group contribution names (field names)
group_names = joback_group_contribution_names()
# Get both names and IDs as tuples
names, ids = joback_group_contribution_info()
```
#### 💧 Zabransky–Ruzicka group contributions
```python
from pyThermoEst.docs.zabransky_ruzicka import (
zabransky_ruzicka_group_contribution_info,
zabransky_ruzicka_group_contribution_names,
zabransky_ruzicka_group_contribution_ids,
zabransky_ruzicka_group_correction_info,
zabransky_ruzicka_group_correction_ids,
zabransky_ruzicka_group_correction_names
)
# Get group contribution IDs (aliases)
group_ids = zabransky_ruzicka_group_contribution_ids()
# Get group contribution names (field names)
group_names = zabransky_ruzicka_group_contribution_names()
# Get both names and IDs as tuples
names, ids = zabransky_ruzicka_group_contribution_info()
# Get correction term IDs
correction_ids = zabransky_ruzicka_group_correction_ids()
# Get correction term names
correction_names = zabransky_ruzicka_group_correction_names()
# Get both correction names and IDs as tuples
corr_names, corr_ids = zabransky_ruzicka_group_correction_info()
```
### 📖 Further examples
- `examples/joback-exp-0.py`: Inspect available Joback group IDs and names.
- `examples/joback-exp-1.py`: Build Joback group payloads with field names or aliases.
- `examples/joback-exp-2.py`: Full Joback calculation including temperature evaluation.
- `examples/zabransky-ruzicka-exp-0.py`: Inspect available Zabransky–Ruzicka group IDs, names, and corrections.
- `examples/zabransky-ruzicka-exp-1.py`: Zabransky–Ruzicka calculation with optional corrections.
## 🔧 API reference
- `pyThermoEst.app.joback_calc(groups, total_atoms_number)`: Runs Joback method and returns calculated properties.
- `pyThermoEst.app.zabransky_ruzicka_calc(group_contributions, group_corrections=None)`: Returns an equation for liquid heat capacity plus units and symbol metadata.
Each function accepts either the pydantic models or plain dictionaries keyed by group identifiers; aliases are supported for convenience.
## 📝 License
This project is licensed under the MIT License. You are free to use, modify, and distribute this software in your own applications or projects. However, if you choose to use this app in another app or software, please ensure that my name, Sina Gilassi, remains credited as the original author. This includes retaining any references to the original repository or documentation where applicable. By doing so, you help acknowledge the effort and time invested in creating this project.
## ❓ FAQ
For any question, contact me on [LinkedIn](https://www.linkedin.com/in/sina-gilassi/)
| text/markdown | null | Sina Gilassi <sina.gilassi@gmail.com> | null | null | null | chemical-engineering, thermodynamics, property-estimation, joback-method, zabransky-ruzicka-method, group-contribution-methods, parameter-estimation | [
"Development Status :: 1 - Planning",
"Intended Audience :: Education",
"Programming Language :: Python :: 3.11",
"Operating System :: Unix",
"Operating System :: MacOS :: MacOS X",
"Operating System :: Microsoft :: Windows"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=2.3.5",
"scipy>=1.16.3",
"pandas>=2.3.3",
"pyyaml>=6.0.3",
"pydantic>=2.12.4",
"pydantic-settings>=2.12.0",
"pythermodb-settings",
"pycuc"
] | [] | [] | [] | [
"Homepage, https://github.com/sinagilassi/PyThermoEst",
"Documentation, https://pythermoest.readthedocs.io/en/latest/",
"Source, https://github.com/sinagilassi/PyThermoEst",
"Tracker, https://github.com/sinagilassi/PyThermoEst/issues"
] | uv/0.8.3 | 2026-02-21T05:52:59.683461 | pythermoest-0.4.0.tar.gz | 50,427 | 78/bb/b2382d8a701fe64b47c6d21377f7f38c029b788139d05c79aef440b43181/pythermoest-0.4.0.tar.gz | source | sdist | null | false | 7546d02823ff9f2dd63c1511926729f0 | 4d9c9abef6743a33adcc415d12bba1d2c5cc34ff30d79a117e3c70c9201af3a2 | 78bbb2382d8a701fe64b47c6d21377f7f38c029b788139d05c79aef440b43181 | MIT | [
"LICENSE"
] | 0 |
2.4 | tamilstring | 2.2.0 | tamilstring helps to handle tamil unicode characters lot more easier | # TamilString
[](https://pypi.org/project/tamilstring/)
[](https://gitlab.com/boopalan-dev/tamilstring/-/blob/main/LICENSE)
**English:**
TamilString is a Python library designed to simplify the handling and manipulation of Tamil Unicode characters, enabling developers to process Tamil text more efficiently in their applications.
**தமிழ்:**
TamilString என்பது தமிழ் யூனிகோட் எழுத்துகளை எளிதாக கையாளவும், செயலாக்கவும் உதவும் ஒரு Python நூலகமாகும், இது டெவலப்பர்களுக்கு தங்கள் பயன்பாடுகளில் தமிழ் உரையை சிறப்பாக செயல்படுத்த உதவுகிறது.
## Table of Contents
1. [Inspiration - தூண்டுதல்](#inspiration---தூண்டுதல்)
2. [Features - அம்சங்கள்](#features---அம்சங்கள்)
3. [Installation - நிறுவல்](#installation---நிறுவல்)
4. [Usage - பயன்பாடு](#usage---பயன்பாடு)
5. [Contributing - பங்களிப்பு](#contributing---பங்களிப்பு)
6. [License - உரிமம்](#license---உரிமம்)
7. [Acknowledgments - நன்றியுரைகள்](#acknowledgments---நன்றியுரைகள்)
8. [Contributors - பங்களிப்பாளர்கள்](#contributors---பங்களிப்பாளர்கள்)
## Inspiration - தூண்டுதல்
**English:**
TamilString was inspired by the [Open-Tamil](https://pypi.org/project/Open-Tamil/) project, which offers a set of Python libraries for Tamil text processing. While Open-Tamil laid the groundwork, TamilString aims to enhance and expand these capabilities. For instance, TamilString addresses specific issues found in Open-Tamil, such as the inaccurate output when handling complex Tamil ligatures like 'ஸ்ரீ'. By improving the processing of such characters, TamilString provides more accurate and reliable results for developers working with Tamil text.
**தமிழ்:**
TamilString திட்டம் [Open-Tamil](https://pypi.org/project/Open-Tamil/) திட்டத்தால் தூண்டப்பட்டது, இது தமிழ் உரை செயலாக்கத்திற்கான Python நூலகங்களை வழங்குகிறது. Open-Tamil அடித்தளத்தை அமைத்தபோதிலும், TamilString இந்த திறன்களை மேம்படுத்த மற்றும் விரிவாக்க நோக்கத்துடன் உருவாக்கப்பட்டது. உதாரணமாக, Open-Tamil இல் காணப்படும் 'ஸ்ரீ' போன்ற சிக்கலான தமிழ் லிகேச்சர்களை கையாளும்போது ஏற்படும் தவறான வெளியீட்டை TamilString தீர்க்கிறது. இப்படியான எழுத்துகளைச் சரியாக செயலாக்குவதன் மூலம், தமிழ் உரையுடன் பணிபுரியும் டெவலப்பர்களுக்கு TamilString மேலும் துல்லியமான மற்றும் நம்பகமான முடிவுகளை வழங்குகிறது.
## Features - அம்சங்கள்
**English:**
- Comprehensive support for Tamil Unicode character manipulation.
- Functions for transliteration between Tamil and other scripts.
- Tools for text normalization and validation specific to the Tamil language.
**தமிழ்:**
- தமிழ் யூனிகோட் எழுத்துகளை முழுமையாக கையாள்வதற்கான ஆதரவு.
- தமிழ் மற்றும் பிற எழுத்துக்களுக்கிடையே எழுத்துப்பெயர்ப்பு செய்யும் செயல்பாடுகள்.
- தமிழ் மொழிக்கேற்ப உரை சாதாரணமாக்கல் மற்றும் சரிபார்ப்பு கருவிகள்.
## Installation - நிறுவல்
**English:**
Install the latest version of TamilString using pip:
```bash
pip install tamilstring
```
**தமிழ்:**
pip பயன்படுத்தி TamilString இன் சமீபத்திய பதிப்பை நிறுவவும்:
```bash
pip install tamilstring
```
## Usage - பயன்பாடு
**English:**
Here's a basic example demonstrating how to use TamilString:
```python
import tamilstring
# Example function usage
string = 'தமிழ்'
tamil_str = tamilstring.String(string)
# Splitting the string into characters
characters = list(tamil_str)
print(characters)
```
**Output:**
```python
['த', 'மி', 'ழ்']
```
**தமிழ்:**
TamilString ஐ எவ்வாறு பயன்படுத்துவது என்பதை காட்டும் ஒரு அடிப்படை எடுத்துக்காட்டு:
```python
import tamilstring
# எடுத்துக்காட்டு செயல்பாடு பயன்பாடு
string = 'தமிழ்'
tamil_str = tamilstring.String(string)
# எழுத்துக்களைப் பிரித்தல்
characters = list(tamil_str)
print(characters)
```
**வெளியீடு:**
```python
['த', 'மி', 'ழ்']
```
For more detailed usage and advanced features, please refer to the [Documentation](https://tamilstring-011d48.gitlab.io/).
## Contributing - பங்களிப்பு
**English:**
We welcome contributions! If you have suggestions or encounter issues, please raise them in our [GitLab Issues](https://gitlab.com/boopalan-dev/tamilstring/-/issues).
**தமிழ்:**
நாங்கள் பங்களிப்புகளை வரவேற்கிறோம்! உங்களிடம் பரிந்துரைகள் அல்லது சிக்கல்கள் இருந்தால், தயவுசெய்து அவற்றை எங்கள் [GitLab Issues](https://gitlab.com/boopalan-dev/tamilstring/-/issues) இல் பதிவு செய்யவும்.
### Adding Yourself as a Contributor | பங்களிப்பாளராக சேர்க்க
**English:**
At the time of contribution, please add your profile to the list of contributors **before** sending the merge request by including the following HTML snippet in the `README.md` file:
```html
<a href="https://gitlab.com/your_username">
<img src="IMAGE_URL" width="100" height="100" style="border-radius: 50%;" alt="Your Name"/>
</a>
```
**Instructions:**
1. Go to your GitLab profile.
2. Right-click your profile image → “Open image in new tab”.
3. Copy the full image URL from the new tab.
4. Replace `IMAGE_URL` in the above snippet with the copied URL.
5. Replace `your_username` and `Your Name` accordingly.
**தமிழ்:**
பங்களிப்பு செய்யும் போது, merge request அனுப்புவதற்கு முன் `README.md` கோப்பில் பங்களிப்பாளர்கள் பட்டியலில் உங்கள் சுயவிவரத்தை கீழ்காணும் HTML குறியீட்டின் மூலம் சேர்க்கவும்:
```html
<a href="https://gitlab.com/your_username">
<img src="IMAGE_URL" width="100" height="100" style="border-radius: 50%;" alt="உங்கள் பெயர்"/>
</a>
```
**வழிமுறைகள்:**
1. உங்கள் GitLab சுயவிவரத்திற்கு செல்லவும்.
2. சுயவிவரப் படத்தை வலது கிளிக் செய்து “Open image in new tab” என்பதைத் தேர்ந்தெடுக்கவும்.
3. புதிய தாவலில் தோன்றும் URL ஐ முழுவதுமாக copy செய்யவும்.
4. மேலே உள்ள குறியீட்டில் `IMAGE_URL` என்பதை அந்த URL உடன் மாற்றவும்.
5. பின்னர் `your_username` மற்றும் `உங்கள் பெயர்` விவரங்களுடன் மாற்றவும்.
## License - உரிமம்
**English:**
This project is licensed under the MIT License. See the [LICENSE](https://gitlab.com/boopalan-dev/tamilstring/-/blob/main/LICENSE) file for details.
**தமிழ்:**
இந்த திட்டம் MIT உரிமத்தின் கீழ் வழங்கப்படுகிறது. விவரங்களுக்கு [உரிமம்](https://gitlab.com/boopalan-dev/tamilstring/-/blob/main/LICENSE) கோப்பை பார்க்கவும்.
## Acknowledgments - நன்றியுரைகள்
**English:**
Special thanks to all contributors and the open-source community for their invaluable support.
**தமிழ்:**
அனைத்து பங்களிப்பாளர்களுக்கும் மற்றும் திறந்த மூல சமூகத்திற்கும் அவர்களின் மதிப்புமிக்க ஆதரவுக்கு சிறப்பு நன்றி.
## Contributors - பங்களிப்பாளர்கள்
<a href="https://gitlab.com/boopalan-dev">
<img src="https://gitlab.com/uploads/-/system/user/avatar/22134717/avatar.png?s=100" width="100" height="100" style="border-radius: 50%;" alt="Boopalan S"/>
</a>
<a href="https://gitlab.com/anandsundaramoorthysa">
<img src="https://gitlab.com/uploads/-/system/user/avatar/22613937/avatar.png?s=100" width="100" height="100" style="border-radius: 50%;" alt="Anand Sundaramoorthy SA"/>
</a>
<a href="https://gitlab.com/bkmgit">
<img src="https://gitlab.com/uploads/-/system/user/avatar/618404/avatar.png?s=100" width="100" height="100" style="border-radius: 50%;" alt="Benson Muite"/>
</a>
| text/markdown | boopalan | contact.boopalan@gmail.com | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"License :: OSI Approved :: MIT License"
] | [] | https://gitlab.com/boopalan-dev/tamilstring | null | >=3.6 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T05:52:34.423907 | tamilstring-2.2.0.tar.gz | 25,713 | 44/cd/9e7d7f54fcf121eaae9c84179d217120b1fad0e51248eb12192d1d4440ee/tamilstring-2.2.0.tar.gz | source | sdist | null | false | 2e4dfd18a33bf0b03fadadea43970583 | 27e941ceed8a5f6151a9010513227b025c1463675dfcd30ac63504df4e3e2617 | 44cd9e7d7f54fcf121eaae9c84179d217120b1fad0e51248eb12192d1d4440ee | null | [
"LICENCE"
] | 202 |
2.4 | pdf-oxide | 0.3.8 | Fast Python PDF library for text extraction, markdown conversion, and document processing. Rust-powered, 2.1ms mean latency. | # PDF Oxide - The Fastest PDF Library for Python and Rust
The fastest Python PDF library for text extraction, image extraction, and markdown conversion. Built on a Rust core for reliability and speed — mean 1.8ms per document, 3.5× faster than leading industry libraries, 100% pass rate on all valid PDFs in a 3,830-file real-world corpus.
[](https://crates.io/crates/pdf_oxide)
[](https://pypi.org/project/pdf_oxide/)
[](https://pypi.org/project/pdf-oxide/)
[](https://docs.rs/pdf_oxide)
[](https://github.com/yfedoseev/pdf_oxide/actions)
[](https://opensource.org/licenses)
## Quick Start
### Python
```python
from pdf_oxide import PdfDocument
doc = PdfDocument("paper.pdf")
text = doc.extract_text(0)
chars = doc.extract_chars(0)
markdown = doc.to_markdown(0, detect_headings=True)
```
```bash
pip install pdf_oxide
```
### Rust
```rust
use pdf_oxide::PdfDocument;
let mut doc = PdfDocument::open("paper.pdf")?;
let text = doc.extract_text(0)?;
let images = doc.extract_images(0)?;
let markdown = doc.to_markdown(0, Default::default())?;
```
```toml
[dependencies]
pdf_oxide = "0.3"
```
## Why pdf_oxide?
- **Fast** — Rust core, mean 1.8ms per document, 3.5× faster than leading industry libraries, 98.4% of documents under 10ms
- **Reliable** — 100% pass rate on all valid PDFs in a 3,830-file corpus, zero panics, zero slow (>5s) PDFs
- **Complete** — Text extraction, image extraction, PDF creation, and editing in one library
- **Dual-language** — First-class Rust API and Python bindings via PyO3
- **Permissive license** — MIT / Apache-2.0 — use freely in commercial and open-source projects
## Features
| Extract | Create | Edit |
|---------|--------|------|
| Text & Layout | Documents | Annotations |
| Images | Tables | Form Fields |
| Forms | Graphics | Bookmarks |
| Annotations | Templates | Links |
| Bookmarks | Images | Content |
## Python API
```python
from pdf_oxide import PdfDocument
doc = PdfDocument("report.pdf")
print(f"Pages: {doc.page_count}")
print(f"Version: {doc.version}")
# Extract text from each page
for i in range(doc.page_count):
text = doc.extract_text(i)
print(f"Page {i}: {len(text)} chars")
# Character-level extraction with positions
chars = doc.extract_chars(0)
for ch in chars:
print(f"'{ch.char}' at ({ch.x:.1f}, {ch.y:.1f})")
# Password-protected PDFs
doc = PdfDocument("encrypted.pdf")
doc.authenticate("password")
text = doc.extract_text(0)
```
## Rust API
```rust
use pdf_oxide::PdfDocument;
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut doc = PdfDocument::open("paper.pdf")?;

    // Extract text
    let text = doc.extract_text(0)?;

    // Character-level extraction
    let chars = doc.extract_chars(0)?;

    // Extract images
    let images = doc.extract_images(0)?;

    // Vector graphics
    let paths = doc.extract_paths(0)?;

    Ok(())
}
```
## Performance
Verified against 3,830 PDFs from three independent test suites:
| Corpus | PDFs | Pass Rate |
|--------|-----:|----------:|
| veraPDF (PDF/A compliance) | 2,907 | 100% |
| Mozilla pdf.js | 897 | 99.2% |
| SafeDocs (targeted edge cases) | 26 | 100% |
| **Total** | **3,830** | **99.8%** (100% of valid) |
| Metric | Value |
|--------|-------|
| **Mean latency** | **1.8ms** |
| **p50 latency** | 0.6ms |
| **p90 latency** | 2.6ms |
| **p99 latency** | 18ms |
| **Max latency** | 625ms |
| **Under 10ms** | 98.4% |
| **Slow (>5s)** | 0 |
| **Timeouts** | 0 |
| **Panics** | 0 |
100% pass rate on all valid PDFs — the 7 non-passing files across the corpus are intentionally broken test fixtures (missing PDF header, fuzz-corrupted catalogs, invalid xref streams). v0.3.8 adds a text-only content stream parser that skips graphics operators at the byte level, further reducing parse time on graphics-heavy pages.
## Installation
### Python
```bash
pip install pdf_oxide
```
Wheels available for Linux, macOS, and Windows. Python 3.8–3.14.
### Rust
```toml
[dependencies]
pdf_oxide = "0.3"
```
## Building from Source
```bash
# Clone and build
git clone https://github.com/yfedoseev/pdf_oxide
cd pdf_oxide
cargo build --release
# Run tests
cargo test
# Build Python bindings
maturin develop
```
## Documentation
- **[Getting Started (Rust)](docs/getting-started-rust.md)** - Complete Rust guide
- **[Getting Started (Python)](docs/getting-started-python.md)** - Complete Python guide
- **[API Docs](https://docs.rs/pdf_oxide)** - Full Rust API reference
- **[PDF Spec Reference](docs/spec/pdf.md)** - ISO 32000-1:2008
## Use Cases
- **RAG / LLM pipelines** — Convert PDFs to clean Markdown for retrieval-augmented generation with LangChain, LlamaIndex, or any framework
- **Document processing at scale** — Extract text, images, and metadata from thousands of PDFs in seconds
- **Data extraction** — Pull structured data from forms, tables, and layouts
- **Academic research** — Parse papers, extract citations, and process large corpora
- **PDF generation** — Create invoices, reports, certificates, and templated documents programmatically
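For RAG ingestion, pages converted with `to_markdown` are typically split into heading-aligned chunks before embedding. A minimal sketch of such a splitter — the splitter itself is plain Python over the extracted string; the `pdf_oxide` call is shown commented, and the 1200-character budget is an arbitrary choice for illustration, not a library default:

```python
def chunk_markdown(md: str, max_chars: int = 1200) -> list[str]:
    """Split extracted Markdown into chunks, preferring heading boundaries."""
    chunks, current, size = [], [], 0
    for line in md.splitlines():
        # Start a new chunk at a heading once the current one is large enough
        if line.startswith("#") and size > max_chars // 2:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line) + 1
        if size > max_chars:  # hard cap for heading-less runs of text
            chunks.append("\n".join(current))
            current, size = [], 0
    if current:
        chunks.append("\n".join(current))
    return chunks

# Usage with pdf_oxide (per the Quick Start API above):
# doc = PdfDocument("paper.pdf")
# chunks = [c for i in range(doc.page_count)
#           for c in chunk_markdown(doc.to_markdown(i, detect_headings=True))]
```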
## License
Dual-licensed under [MIT](LICENSE-MIT) or [Apache-2.0](LICENSE-APACHE) at your option. Unlike AGPL-licensed alternatives, pdf_oxide can be used freely in any project — commercial or open-source — with no copyleft restrictions.
## Contributing
We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
```bash
cargo build && cargo test && cargo fmt && cargo clippy -- -D warnings
```
## Citation
```bibtex
@software{pdf_oxide,
title = {PDF Oxide: Fast PDF Toolkit for Rust and Python},
author = {Yury Fedoseev},
year = {2025},
url = {https://github.com/yfedoseev/pdf_oxide}
}
```
---
**Rust** + **Python** | MIT/Apache-2.0 | 100% pass rate on valid PDFs (3,830-PDF corpus) | mean 1.8ms/doc | v0.3.8
| text/markdown; charset=UTF-8; variant=GFM | null | PDF Oxide Contributors <yfedoseev@gmail.com> | null | null | MIT OR Apache-2.0 | pdf, text-extraction, pdf-parser, pdf-library, rag, llm, markdown, document-parser, pdf-to-text, pdf-extraction, fast-pdf, data-extraction | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Langua... | [] | https://github.com/yfedoseev/pdf_oxide | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"API Reference (Rust), https://docs.rs/pdf_oxide",
"Bug Tracker, https://github.com/yfedoseev/pdf_oxide/issues",
"Documentation, https://github.com/yfedoseev/pdf_oxide/blob/main/docs/getting-started-python.md",
"Homepage, https://github.com/yfedoseev/pdf_oxide",
"Repository, https://github.com/yfedoseev/pdf... | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:52:30.588570 | pdf_oxide-0.3.8-cp38-abi3-win_amd64.whl | 2,874,245 | 2a/26/c696f30a3831b60ae45c8ba6a3a54a9ed3454d8c0dc8c0f00e6e5df62d34/pdf_oxide-0.3.8-cp38-abi3-win_amd64.whl | cp38 | bdist_wheel | null | false | 5a70647bf24cc5d285015e900979a258 | 05602097f1e8ff65edb2ad41659996865fdfa56de9147df4501343b46ffca632 | 2a26c696f30a3831b60ae45c8ba6a3a54a9ed3454d8c0dc8c0f00e6e5df62d34 | null | [
"LICENSE-APACHE",
"LICENSE-MIT"
] | 248 |
2.4 | dtSpark | 1.1.0a30 | Secure Personal AI Research Kit - Multi-provider LLM CLI/Web interface with MCP tool integration | # Spark - Secure Personal AI Research Kit
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](https://sonarcloud.io/summary/new_code?id=Digital-Thought_dtSpark)
[](https://sonarcloud.io/summary/new_code?id=Digital-Thought_dtSpark)
[](https://sonarcloud.io/summary/new_code?id=Digital-Thought_dtSpark)
[](https://sonarcloud.io/summary/new_code?id=Digital-Thought_dtSpark)
[](https://sonarcloud.io/summary/new_code?id=Digital-Thought_dtSpark)
[](https://sonarcloud.io/summary/new_code?id=Digital-Thought_dtSpark)
**Spark** is a powerful, multi-provider LLM interface for conversational AI with integrated tool support. It supports AWS Bedrock, Anthropic Direct API, and Ollama local models through both CLI and Web interfaces.
## Key Features
- **Multi-Provider Support** - AWS Bedrock, Anthropic Direct API, and Ollama local models
- **Dual Interface** - Rich CLI terminal UI and modern Web browser interface
- **MCP Tool Integration** - Connect external tools via Model Context Protocol
- **Intelligent Context Management** - Automatic conversation compaction with model-aware limits
- **Security Features** - Prompt inspection, tool permissions, and audit logging
- **Multiple Database Backends** - SQLite, MySQL, PostgreSQL, and Microsoft SQL Server
## Quick Start
### Installation
```bash
pip install dtSpark
```
### First-Time Setup
Run the interactive setup wizard to configure Spark:
```bash
spark --setup
```
This guides you through:
- LLM provider selection and configuration
- Database setup
- Interface preferences
- Security settings
### Running Spark
```bash
# Start with CLI interface
spark
# Or use the alternative command
dtSpark
```
## Documentation
Comprehensive documentation is available in the [docs](docs/) folder:
- [Installation Guide](docs/installation.md) - Detailed installation instructions
- [Configuration Reference](docs/configuration.md) - Complete config.yaml documentation
- [Features Guide](docs/features.md) - Detailed feature documentation
- [CLI Reference](docs/cli-reference.md) - Command-line options and chat commands
- [Web Interface](docs/web-interface.md) - Web UI guide
- [MCP Integration](docs/mcp-integration.md) - Tool integration documentation
- [Security](docs/security.md) - Security features and best practices
## Architecture Overview
```mermaid
graph LR
subgraph Interfaces
CLI[CLI]
WEB[Web]
end
subgraph Core
CM[Conversation<br/>Manager]
end
subgraph Providers
BEDROCK[AWS Bedrock]
ANTHROPIC[Anthropic]
OLLAMA[Ollama]
end
subgraph Tools
MCP[MCP Servers]
BUILTIN[Built-in Tools]
end
CLI --> CM
WEB --> CM
CM --> BEDROCK
CM --> ANTHROPIC
CM --> OLLAMA
CM --> MCP
CM --> BUILTIN
```
## Requirements
- Python 3.10 or higher
- AWS credentials (for Bedrock)
- Anthropic API key (for direct API)
- Ollama server (for local models)
## Licence
MIT Licence - see [LICENSE](LICENSE) for details.
## Author
Matthew Westwood-Hill
matthew@digital-thought.org
## Support
- **Documentation**: [docs/](docs/)
- **Issues**: [GitHub Issues](https://github.com/digital-thought/dtSpark/issues)
| text/markdown | Matthew Westwood-Hill | Matthew Westwood-Hill <matthew@digital-thought.org> | null | null | MIT | llm, ai, chatbot, aws, bedrock, anthropic, claude, ollama, mcp, model-context-protocol, cli, web | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Environment :: Web Environment",
"Intended Audience :: Developers",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Pr... | [] | https://github.com/digital-thought/dtSpark | null | >=3.10 | [] | [] | [] | [
"boto3>=1.28.0",
"botocore>=1.31.0",
"fastapi>=0.100.0",
"uvicorn>=0.22.0",
"jinja2>=3.1.0",
"python-multipart>=0.0.6",
"sse-starlette>=1.6.0",
"rich>=13.0.0",
"prompt_toolkit>=3.0.0",
"httpx>=0.24.0",
"aiohttp>=3.8.0",
"mcp>=0.9.0",
"pyyaml>=6.0",
"dtPyAppFramework>=4.3.0",
"tiktoken>=0... | [] | [] | [] | [
"Homepage, https://github.com/digital-thought/dtSpark",
"Documentation, https://github.com/digital-thought/dtSpark#readme",
"Repository, https://github.com/digital-thought/dtSpark",
"Issues, https://github.com/digital-thought/dtSpark/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:52:20.152780 | dtspark-1.1.0a30.tar.gz | 369,814 | c0/6b/19bf39d9bb8c5d0cc873634a3c9e0f2f87f16d96cbeeda5ea6414daa4115/dtspark-1.1.0a30.tar.gz | source | sdist | null | false | 24d9c99a423349fd3f13fa8ae79d0a27 | 6761cc877fb2b406ca9123e23a84f57c04d356db87488f332fe24ca2b17e1550 | c06b19bf39d9bb8c5d0cc873634a3c9e0f2f87f16d96cbeeda5ea6414daa4115 | null | [
"LICENSE"
] | 0 |
2.4 | pulumi-std | 2.4.0a1771652481 | Standard library functions | # pulumi-std
Standard library functions implemented as a native Pulumi provider.
### [Function List](FUNCTION_LIST.md)
### Build
```
make build
```
### Test
```
make test
```
### Generate inferred schema
```
make gen_schema
```
### Installation (SDKs to be published)
The Pulumi Std provider is available as a package in all Pulumi languages:
- JavaScript/TypeScript: [`@pulumi/std`](https://www.npmjs.com/package/@pulumi/std)
- Python: [`pulumi-std`](https://pypi.org/project/pulumi-std/)
- Go: [`github.com/pulumi/pulumi-std/sdk/go`](https://pkg.go.dev/github.com/pulumi/pulumi-std/sdk/go)
- .NET: [`Pulumi.Std`](https://www.nuget.org/packages/Pulumi.std)
- YAML: `pulumi plugin install resource std`
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.165.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://github.com/pulumi/pulumi-std",
"Repository, https://github.com/pulumi/pulumi-std"
] | twine/5.0.0 CPython/3.11.8 | 2026-02-21T05:52:10.808992 | pulumi_std-2.4.0a1771652481.tar.gz | 26,574 | 69/c7/fc1fa89439a750b3d2ef7815d45de188c4ad7dc846ddbc7acfd0c2473fe6/pulumi_std-2.4.0a1771652481.tar.gz | source | sdist | null | false | 7d4f135d7b516f7cf8d2587726e8b1ef | 5d620ae7d4a63f2b4da6b6a7ecb517bec159641eac98dab2f41aee5a7a151717 | 69c7fc1fa89439a750b3d2ef7815d45de188c4ad7dc846ddbc7acfd0c2473fe6 | null | [] | 185 |
2.4 | urlpattern | 0.1.10 | An implementation of the URL Pattern Standard for Python written in Rust. | # URL Pattern
[](https://pypi.org/project/urlpattern/)
[](https://pypi.org/project/urlpattern/)
[](https://github.com/astral-sh/ruff)
[](https://github.com/urlpattern/python-urlpattern/actions)
An implementation of [the URL Pattern Standard](https://urlpattern.spec.whatwg.org/) for Python written in Rust.
## Introduction
The URL Pattern Standard is a web standard for URL pattern matching. It is useful on the server side when serving different pages based on the URL (a.k.a. routing). It provides pattern matching syntax like `/users/:id`, similar to [route parameters in Express](https://expressjs.com/en/guide/routing.html#route-parameters) or [Path-to-RegExp](https://github.com/pillarjs/path-to-regexp). You can use it as a foundation to build your own web server or framework.
It's a thin wrapper of [denoland/rust-urlpattern](https://github.com/denoland/rust-urlpattern) with [PyO3](https://github.com/PyO3/pyo3) + [Maturin](https://github.com/PyO3/maturin).
The naming conventions follow [the standard](https://urlpattern.spec.whatwg.org/) as closely as possible, similar to [xml.dom](https://docs.python.org/3/library/xml.dom.html).
## Installation
On Linux/UNIX or macOS:
```sh
pip install urlpattern
```
On Windows:
```sh
py -m pip install urlpattern
```
## Usage
Check [urlpattern.pyi](https://github.com/urlpattern/python-urlpattern/blob/main/urlpattern.pyi) or the examples below.
For various usage examples, you can also check [Chrome for Developers](https://developer.chrome.com/docs/web-platform/urlpattern) or [MDN](https://developer.mozilla.org/en-US/docs/Web/API/URL_Pattern_API) (you need to convert JavaScript into Python).
## Examples
### `test`
```py
from urlpattern import URLPattern
pattern = URLPattern("https://example.com/admin/*")
print(pattern.test("https://example.com/admin/main/")) # output: True
print(pattern.test("https://example.com/main/")) # output: False
```
### `exec`
```py
from urlpattern import URLPattern
pattern = URLPattern({"pathname": "/users/:id/"})
result = pattern.exec({"pathname": "/users/4163/"})
print(result["pathname"]["groups"]["id"]) # output: 4163
```
### `baseURL`
```py
from urlpattern import URLPattern
pattern = URLPattern("b", "https://example.com/a/")
print(pattern.test("a/b", "https://example.com/")) # output: True
print(pattern.test("b", "https://example.com/a/")) # output: True
print(
pattern.test({"pathname": "b", "baseURL": "https://example.com/a/"})
) # output: True
```
### `ignoreCase`
```py
from urlpattern import URLPattern
pattern = URLPattern("https://example.com/test")
print(pattern.test("https://example.com/test")) # output: True
print(pattern.test("https://example.com/TeST")) # output: False
pattern = URLPattern("https://example.com/test", {"ignoreCase": True})
print(pattern.test("https://example.com/test")) # output: True
print(pattern.test("https://example.com/TeST")) # output: True
```
### A simple WSGI app
```py
from wsgiref.simple_server import make_server
from urlpattern import URLPattern
user_id_pattern = URLPattern({"pathname": "/users/:id"})
def get_user_id(environ, start_response):
    user_id = environ["result"]["pathname"]["groups"]["id"]
    status = "200 OK"
    response_headers = [("Content-type", "text/plain; charset=utf-8")]
    start_response(status, response_headers)
    return [f"{user_id=}".encode()]

def app(environ, start_response):
    path = environ["PATH_INFO"]
    method = environ["REQUEST_METHOD"]
    if result := user_id_pattern.exec({"pathname": path}):
        if method == "GET":
            return get_user_id(environ | {"result": result}, start_response)
    status = "404 Not Found"
    response_headers = [("Content-type", "text/plain; charset=utf-8")]
    start_response(status, response_headers)
    return [b"Not Found"]

with make_server("", 8000, app) as httpd:
    httpd.serve_forever()
```
## Limitations
Due to limitations in the dependency [denoland/rust-urlpattern](https://github.com/denoland/rust-urlpattern), it may not support all features specified in [the standard](https://urlpattern.spec.whatwg.org/).
Check `pytest.skip` in [`tests/test_lib.py`](https://github.com/urlpattern/python-urlpattern/blob/main/tests/test_lib.py).
| text/markdown; charset=UTF-8; variant=GFM | null | "방성범 (Bang Seongbeom)" <bangseongbeom@gmail.com> | null | null | null | urlpattern | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: P... | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/urlpattern/python-urlpattern",
"Issues, https://github.com/urlpattern/python-urlpattern/issues",
"Repository, https://github.com/urlpattern/python-urlpattern.git"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T05:51:34.103869 | urlpattern-0.1.10-cp312-cp312-macosx_11_0_arm64.whl | 1,035,370 | 0a/a7/0686fa4ab2c491887c8f787798b6abf2e5d1159a2d2e8fa942d9bad79dd7/urlpattern-0.1.10-cp312-cp312-macosx_11_0_arm64.whl | cp312 | bdist_wheel | null | false | 6cc4a975528331787e6802d52d58cd9d | 8b906d63b1151080c7b1b31ae01b90cb60662d389f1d70535b208ea030321cba | 0aa70686fa4ab2c491887c8f787798b6abf2e5d1159a2d2e8fa942d9bad79dd7 | null | [
"LICENSE"
] | 5,837 |
2.4 | applypilot | 0.3.0 | AI-powered end-to-end job application pipeline | <!-- logo here -->
# ApplyPilot
**Applied to 1,000 jobs in 2 days. Fully autonomous. Open source.**
[](https://pypi.org/project/applypilot/)
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://github.com/Pickle-Pixel/ApplyPilot)
[](https://ko-fi.com/S6S01UL5IO)
https://github.com/user-attachments/assets/7ee3417f-43d4-4245-9952-35df1e77f2df
---
## What It Does
ApplyPilot is a 6-stage autonomous job application pipeline. It discovers jobs across 5+ boards, scores them against your resume with AI, tailors your resume per job, writes cover letters, and **submits applications for you**. It navigates forms, uploads documents, answers screening questions, all hands-free.
Three commands. That's it.
```bash
pip install applypilot
pip install --no-deps python-jobspy && pip install pydantic tls-client requests markdownify regex
applypilot init # one-time setup: resume, profile, preferences, API keys
applypilot doctor # verify your setup — shows what's installed and what's missing
applypilot run # discover > enrich > score > tailor > cover letters
applypilot run -w 4 # same but parallel (4 threads for discovery/enrichment)
applypilot apply # autonomous browser-driven submission
applypilot apply -w 3 # parallel apply (3 Chrome instances)
applypilot apply --dry-run # fill forms without submitting
```
> **Why two install commands?** `python-jobspy` pins an exact numpy version in its metadata that conflicts with pip's resolver, but works fine at runtime with any modern numpy. The `--no-deps` flag bypasses the resolver; the second command installs jobspy's actual runtime dependencies. Everything except `python-jobspy` installs normally.
---
## Two Paths
### Full Pipeline (recommended)
**Requires:** Python 3.11+, Node.js (for npx), Gemini API key (free), Claude Code CLI, Chrome
Runs all 6 stages, from job discovery to autonomous application submission. This is the full power of ApplyPilot.
### Discovery + Tailoring Only
**Requires:** Python 3.11+, Gemini API key (free)
Runs stages 1-5: discovers jobs, scores them, tailors your resume, generates cover letters. You submit applications manually with the AI-prepared materials.
---
## The Pipeline
| Stage | What Happens |
|-------|-------------|
| **1. Discover** | Scrapes 5 job boards (Indeed, LinkedIn, Glassdoor, ZipRecruiter, Google Jobs) + 48 Workday employer portals + 30 direct career sites |
| **2. Enrich** | Fetches full job descriptions via JSON-LD, CSS selectors, or AI-powered extraction |
| **3. Score** | AI rates every job 1-10 based on your resume and preferences. Only high-fit jobs proceed |
| **4. Tailor** | AI rewrites your resume per job: reorganizes, emphasizes relevant experience, adds keywords. Never fabricates |
| **5. Cover Letter** | AI generates a targeted cover letter per job |
| **6. Auto-Apply** | Claude Code navigates application forms, fills fields, uploads documents, answers questions, and submits |
Each stage is independent. Run them all or pick what you need.
---
## ApplyPilot vs The Alternatives
| Feature | ApplyPilot | AIHawk | Manual |
|---------|-----------|--------|--------|
| Job discovery | 5 boards + Workday + direct sites | LinkedIn only | One board at a time |
| AI scoring | 1-10 fit score per job | Basic filtering | Your gut feeling |
| Resume tailoring | Per-job AI rewrite | Template-based | Hours per application |
| Auto-apply | Full form navigation + submission | LinkedIn Easy Apply only | Click, type, repeat |
| Supported sites | Indeed, LinkedIn, Glassdoor, ZipRecruiter, Google Jobs, 48 Workday portals, 30 direct sites | LinkedIn | Whatever you open |
| License | AGPL-3.0 | MIT | N/A |
---
## Requirements
| Component | Required For | Details |
|-----------|-------------|---------|
| Python 3.11+ | Everything | Core runtime |
| Node.js 18+ | Auto-apply | Needed for `npx` to run Playwright MCP server |
| Gemini API key | Scoring, tailoring, cover letters | Free tier (15 RPM / 1M tokens/day) is enough |
| Chrome/Chromium | Auto-apply | Auto-detected on most systems |
| Claude Code CLI | Auto-apply | Install from [claude.ai/code](https://claude.ai/code) |
**Gemini API key is free.** Get one at [aistudio.google.com](https://aistudio.google.com). OpenAI and local models (Ollama/llama.cpp) are also supported.
### Optional
| Component | What It Does |
|-----------|-------------|
| CapSolver API key | Solves CAPTCHAs during auto-apply (hCaptcha, reCAPTCHA, Turnstile, FunCaptcha). Without it, CAPTCHA-blocked applications just fail gracefully |
> **Note:** python-jobspy is installed separately with `--no-deps` because it pins an exact numpy version in its metadata that conflicts with pip's resolver. It works fine with modern numpy at runtime.
---
## Configuration
All generated by `applypilot init`:
### `profile.json`
Your personal data in one structured file: contact info, work authorization, compensation, experience, skills, resume facts (preserved during tailoring), and EEO defaults. Powers scoring, tailoring, and form auto-fill.
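As a rough illustration only — the field names below are hypothetical, not the actual schema that `applypilot init` generates — a minimal `profile.json` might look like:

```json
{
  "contact": {"name": "Jane Doe", "email": "jane@example.com"},
  "work_authorization": "US citizen",
  "compensation": {"minimum": 90000, "currency": "USD"},
  "skills": ["python", "sql", "aws"],
  "resume_facts": ["Acme Corp, Data Analyst, 2021-2024"]
}
```

Run `applypilot init` to generate the real file with the fields the pipeline expects.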
### `searches.yaml`
Job search queries, target titles, locations, boards. Run multiple searches with different parameters.
### `.env`
API keys and runtime config: `GEMINI_API_KEY`, `LLM_MODEL`, `CAPSOLVER_API_KEY` (optional).
### Package configs (shipped with ApplyPilot)
- `config/employers.yaml` - Workday employer registry (48 preconfigured)
- `config/sites.yaml` - Direct career sites (30+), blocked sites, base URLs, manual ATS domains
- `config/searches.example.yaml` - Example search configuration
---
## How Stages Work
### Discover
Queries Indeed, LinkedIn, Glassdoor, ZipRecruiter, Google Jobs via JobSpy. Scrapes 48 Workday employer portals (configurable in `employers.yaml`). Hits 30 direct career sites with custom extractors. Deduplicates by URL.
### Enrich
Visits each job URL and extracts the full description. 3-tier cascade: JSON-LD structured data, then CSS selector patterns, then AI-powered extraction for unknown layouts.
### Score
AI scores every job 1-10 against your profile. 9-10 = strong match, 7-8 = good, 5-6 = moderate, 1-4 = skip. Only jobs above your threshold proceed to tailoring.
### Tailor
Generates a custom resume per job: reorders experience, emphasizes relevant skills, incorporates keywords from the job description. Your `resume_facts` (companies, projects, metrics) are preserved exactly. The AI reorganizes but never fabricates.
### Cover Letter
Writes a targeted cover letter per job referencing the specific company, role, and how your experience maps to their requirements.
### Auto-Apply
Claude Code launches a Chrome instance, navigates to each application page, detects the form type, fills personal information and work history, uploads the tailored resume and cover letter, answers screening questions with AI, and submits. A live dashboard shows progress in real-time.
The Playwright MCP server is configured automatically at runtime per worker. No manual MCP setup needed.
```bash
# Utility modes (no Chrome/Claude needed)
applypilot apply --mark-applied URL # manually mark a job as applied
applypilot apply --mark-failed URL # manually mark a job as failed
applypilot apply --reset-failed # reset all failed jobs for retry
applypilot apply --gen --url URL # generate prompt file for manual debugging
```
---
## CLI Reference
```
applypilot init # First-time setup wizard
applypilot doctor # Verify setup, diagnose missing requirements
applypilot run [stages...] # Run pipeline stages (or 'all')
applypilot run --workers 4 # Parallel discovery/enrichment
applypilot run --stream # Concurrent stages (streaming mode)
applypilot run --min-score 8 # Override score threshold
applypilot run --dry-run # Preview without executing
applypilot run --validation lenient # Relax validation (recommended for Gemini free tier)
applypilot run --validation strict # Strictest validation (retries on any banned word)
applypilot apply # Launch auto-apply
applypilot apply --workers 3 # Parallel browser workers
applypilot apply --dry-run # Fill forms without submitting
applypilot apply --continuous # Run forever, polling for new jobs
applypilot apply --headless # Headless browser mode
applypilot apply --url URL # Apply to a specific job
applypilot status # Pipeline statistics
applypilot dashboard # Open HTML results dashboard
```
---
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, coding standards, and PR guidelines.
---
## License
ApplyPilot is licensed under the [GNU Affero General Public License v3.0](LICENSE).
You are free to use, modify, and distribute this software. If you deploy a modified version as a service, you must release your source code under the same license.
| text/markdown | Pickle-Pixel | null | null | null | null | ai, apply, automation, job-application, job-search, resume | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Office/Business"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"beautifulsoup4>=4.12",
"httpx>=0.24",
"pandas>=2.0",
"playwright>=1.40",
"python-dotenv>=1.0",
"pyyaml>=6.0",
"rich>=13.0",
"typer>=0.9.0",
"pytest>=7.0; extra == \"dev\"",
"ruff>=0.1; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Pickle-Pixel/ApplyPilot",
"Repository, https://github.com/Pickle-Pixel/ApplyPilot",
"Issues, https://github.com/Pickle-Pixel/ApplyPilot/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:51:26.415393 | applypilot-0.3.0.tar.gz | 123,469 | ae/4c/9ceb1c33389cee1d764167e87d93cdcfca3cb31c382be402f24ec4cdc690/applypilot-0.3.0.tar.gz | source | sdist | null | false | cba6a0a94b016d5333f5088778248c45 | 2ac4da6681ed0fc8c633f54c3565dd2fa25103994b0db3418700d2a8795a550e | ae4c9ceb1c33389cee1d764167e87d93cdcfca3cb31c382be402f24ec4cdc690 | AGPL-3.0-only | [
"LICENSE"
] | 283 |
2.4 | knowledge-fidelity | 0.1.1 | Compress LLMs while auditing whether they still know truth vs myths. SVD compression + false-belief detection in one toolkit. | # Knowledge Fidelity
**Compress an LLM while auditing whether it still knows truth vs popular myths.**
The first toolkit that uses the same factual probes for both structural importance scoring (SVD compression) and behavioral false-belief detection (confidence cartography). One call to compress and audit:
```python
from knowledge_fidelity import compress_and_audit
report = compress_and_audit("Qwen/Qwen2.5-7B-Instruct", ratio=0.7)
print(f"Retention: {report['retention']:.0%} | "
f"False-belief signal: rho={report['rho_after']:.3f}")
# Retention: 100% | False-belief signal: rho=0.725
```
## Why This Exists
LLM compression is everywhere. Knowledge auditing is rare. Nobody checks both at once.
When you quantize or prune a model, you run HellaSwag and call it a day. But benchmarks don't tell you whether the model now thinks the Berenstain Bears are spelled "Berenstein" or that vaccines cause autism. **Knowledge Fidelity does.**
Two sensors, one toolkit:
| Sensor | What it measures | How |
|--------|-----------------|-----|
| **Structural** (SVD) | Which weights encode facts | Gradient importance on factual probes |
| **Behavioral** (Confidence) | Whether the model believes truth vs myths | Teacher-forced probability on true/false pairs |
The key insight: the same set of factual probes drives both. Compress with awareness of what matters, then verify nothing broke.
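The reported rho values are Spearman rank correlations. A minimal stdlib sketch of the statistic itself — not the package's audit code — assuming untied values (no tie correction) and illustrative per-probe confidence deltas paired with a reference ordering:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation; no tie correction (fine for distinct values)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    # Classic formula: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)) for untied data
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical confidence deltas (true minus false variant) vs. a reference ordering
deltas = [0.42, 0.10, 0.35, -0.05, 0.28]
reference = [5, 1, 4, 0, 3]
print(round(spearman_rho(deltas, reference), 3))  # 1.0: identical ordering
```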
## Early Results (v0.1)
All results below are from the unified toolkit run on Apple Silicon (M3 Ultra, CPU).
### Multi-Seed CF90 Validation (70% rank, 3 seeds)
| Metric | Qwen2.5-0.5B | Qwen2.5-7B-Instruct |
|--------|:------------:|:-------------------:|
| Retention | **95%** ± 0% | **100%** ± 0% |
| rho before | 0.821 | 0.746 |
| rho after | 0.720 | 0.725 |
| rho drop | 0.101 ± 0.000 | **0.021** ± 0.000 |
| Matrices compressed | 72 | 84 |
| Layers frozen | 18/24 | 21/28 |
The 7B model loses only 0.021 rho under CF90 — nearly perfect fidelity at scale.
### Joint Ablation: Compression Ratio vs Confidence (Qwen2.5-0.5B)
| Ratio | Default rho | Mandela rho | Medical rho |
|:-----:|:-----------:|:-----------:|:-----------:|
| 50% | 0.821 → 0.761 | 0.257 → 0.714 | 0.100 → 0.700 |
| 60% | 0.821 → 0.714 | 0.257 → 0.771 | 0.100 → 0.900 |
| 70% | 0.821 → 0.720 | 0.257 → 0.771 | 0.100 → 0.100 |
| 80% | 0.821 → 0.690 | 0.257 → 0.257 | 0.100 → 0.600 |
| 90% | 0.821 → 0.821 | 0.257 → 0.371 | 0.100 → 0.100 |
| 100% | 0.821 → 0.821 | 0.257 → 0.257 | 0.100 → 0.100 |
### Joint Ablation: Compression Ratio vs Confidence (Qwen2.5-7B-Instruct)
| Ratio | Default rho | Mandela rho | Medical rho |
|:-----:|:-----------:|:-----------:|:-----------:|
| 50% | 0.746 → 0.689 | 0.829 → 0.771 | −0.700 → 0.600 |
| 70% | 0.746 → 0.725 | 0.829 → **0.943** | −0.700 → −0.600 |
| 90% | 0.746 → 0.713 | 0.829 → **0.943** | −0.700 → −0.900 |
| 100% | 0.746 → 0.746 | 0.829 → 0.829 | −0.700 → −0.700 |
### SVD as a Denoiser
A surprising finding at 7B scale: **SVD compression can _improve_ the Mandela effect signal.** At 70% and 90% rank, Mandela rho increases from 0.829 to 0.943 — the compressed model discriminates true from false memories _better_ than the original.
This is consistent with the interpretation that truncated SVD strips noise from attention projections while preserving the principal signal directions that encode factual knowledge. On small probe sets (6 Mandela, 5 medical), removing noise can sharpen the true/false separation. The effect is weaker at 0.5B where the baseline Mandela signal is already noisy (rho=0.257).
This has practical implications: moderate CF90 compression may serve as a **denoising regularizer** for factual knowledge, not just a lossy compression step.
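The denoising interpretation can be demonstrated in isolation. A sketch of the generic rank-truncation step on a synthetic matrix — numpy stands in here for the package's actual `compress_qko` implementation, and the noise level is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
# A synthetic "weight matrix": rank-8 signal plus dense Gaussian noise
signal = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 64))
noisy = signal + 0.3 * rng.normal(size=(64, 64))

U, S, Vt = np.linalg.svd(noisy, full_matrices=False)
k = int(0.7 * len(S))  # keep 70% of singular values, as in CF90-style runs
truncated = (U[:, :k] * S[:k]) @ Vt[:k]

# The discarded tail directions carry almost pure noise, so the truncated
# reconstruction sits closer to the clean signal than the noisy matrix does
err_noisy = np.linalg.norm(noisy - signal)
err_trunc = np.linalg.norm(truncated - signal)
print(f"error vs signal: noisy={err_noisy:.2f}, truncated={err_trunc:.2f}")
```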
### Scale-Dependent Findings
| Finding | 0.5B | 7B |
|---------|:----:|:--:|
| Mandela baseline rho | 0.257 (weak) | **0.829** (strong) |
| CF90 rho drop | 0.101 (moderate) | **0.021** (minimal) |
| CF90 retention | 95% | **100%** |
| SVD denoising on Mandela | Mixed | **+0.114 rho** |
The Mandela effect signal strengthens dramatically with scale (3.2× from 0.5B to 7B), and CF90 compression becomes safer at larger scales.
### Prior Results (from Component Projects)
These findings come from the standalone [intelligent-svd](https://github.com/SolomonB14D3/intelligent-svd) and [confidence-cartography](https://github.com/SolomonB14D3/confidence-cartography) projects that this toolkit unifies:
| Finding | Result |
|---------|--------|
| Confidence correlates with human false-belief prevalence | rho=0.652, p=0.016 (Pythia 160M–12B) |
| Out-of-domain medical claims | 88% accuracy at 6.9B |
| Targeted resampling at low-confidence tokens | Outperforms uniform best-of-N |
| CF90 + INT8 stacking | 72–77% retention (Qwen-0.5B, Llama-7B) |
| Importance-guided SVD at 50% rank | 3× better retention than standard SVD |
### Compression Safety Guide
| Layer Type | Safe to Compress | Notes |
|------------|------------------|-------|
| **Q, K, O projections** | Yes at 70% rank | Main target |
| **V projection** | 90–95% only | Marginal gains, high risk below 90% |
| **MLP layers** | **Never** | Destroys model at any compression level |
## Install
```bash
pip install knowledge-fidelity # Core (SVD + probes)
pip install "knowledge-fidelity[cartography]" # + confidence analysis + plots
pip install "knowledge-fidelity[full]" # Everything including MLX
```
Or from source:
```bash
git clone https://github.com/SolomonB14D3/knowledge-fidelity
cd knowledge-fidelity
pip install -e ".[full]"
```
## Quick Start
### One-Call Compress + Audit
```python
from knowledge_fidelity import compress_and_audit
report = compress_and_audit(
    "Qwen/Qwen2.5-7B-Instruct",
    ratio=0.7,           # Keep 70% of singular values
    freeze_ratio=0.75,   # Freeze bottom 75% of layers
)
print(report["summary"])
# Compressed Qwen/Qwen2.5-7B-Instruct at 70% rank | 84 matrices | 21/28 frozen | Retention: 100% | rho: 0.746 -> 0.725
```
### Step-by-Step (More Control)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from knowledge_fidelity.svd import compress_qko, freeze_layers
from knowledge_fidelity import audit_model
# Load
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct", torch_dtype=torch.float32)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
# Compress
compress_qko(model, ratio=0.7) # SVD on Q, K, O projections
freeze_layers(model, ratio=0.75) # Freeze bottom 75%
# Audit
audit = audit_model(model, tokenizer)
print(f"rho={audit['rho']:.3f}, {audit['n_positive_delta']}/{audit['n_probes']} probes positive")
# Fine-tune gently: 1 epoch, lr=1e-5
```
### Importance-Guided Compression (for Aggressive Ratios)
When compressing below 70%, standard SVD loses facts. The importance-guided variant uses gradient information to decide which singular values to keep:
```python
from knowledge_fidelity.svd import compress_qko_importance, compute_importance
importance = compute_importance(model, tokenizer) # Uses shared probes
compress_qko_importance(model, importance, ratio=0.5) # 3x better at 50%
```
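One plausible way to fold gradient information into the truncation choice — the library's actual `compute_importance` internals may differ — is to score each singular triplet by how strongly the accumulated gradient projects onto it, then keep the top-scoring triplets rather than the largest singular values. A minimal numpy sketch of that idea:

```python
import numpy as np

def importance_truncate(W, G, k):
    """Keep the k singular triplets of W that matter most under gradient G.

    W: weight matrix; G: accumulated |dL/dW| from probe texts (same shape).
    Score_i = s_i * |u_i^T G v_i| instead of plain magnitude s_i.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    scores = s * np.abs(np.einsum('ji,jk,ik->i', U, G, Vt))
    keep = np.argsort(scores)[::-1][:k]          # top-k by importance
    return (U[:, keep] * s[keep]) @ Vt[keep, :]  # low-rank reconstruction

# Toy check: shapes work and the result has rank <= k
rng = np.random.default_rng(2)
W = rng.standard_normal((16, 16))
G = rng.standard_normal((16, 16))
W_k = importance_truncate(W, G, k=4)
```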
### Confidence Analysis Only
```python
from knowledge_fidelity.cartography import analyze_confidence
# Teacher-forced: how confident is the model on each token?
record = analyze_confidence(
    "The capital of France is Paris.",
    model_name="EleutherAI/pythia-1.4b",
)
print(f"Mean confidence: {record.mean_top1_prob:.3f}")
print(f"Min confidence at: '{record.min_confidence_token}' "
      f"(prob={record.min_confidence_value:.3f})")
```
### Custom Probes
```python
from knowledge_fidelity import compress_and_audit, load_probes
# Use domain-specific probes
medical_probes = load_probes("data/probes/medical_claims.json")
report = compress_and_audit("my-model", probes=medical_probes)
# Or inline
custom = [
    {"text": "TCP uses a three-way handshake.",
     "false": "TCP uses a two-way handshake.",
     "domain": "networking", "id": "tcp_handshake"},
]
report = compress_and_audit("my-model", probes=custom)
```
## Built-In Probe Sets
| Set | Count | Purpose |
|-----|-------|---------|
| `get_default_probes()` | 20 | Geography, science, history, biology |
| `get_mandela_probes()` | 6 | Popular false memories (Berenstain Bears, Vader quote, etc.) |
| `get_medical_probes()` | 5 | Common medical misconceptions |
| `get_all_probes()` | 31 | All of the above |
Community contributions welcome — add probes for your domain and submit a PR.
## How It Works
### The CF90 Pipeline (Structural Sensor)
1. **Compress** Q, K, O attention projections at 70% rank via truncated SVD
2. **Freeze** 75% of layers from the bottom up
3. **Fine-tune gently** (1 epoch, lr=1e-5)
SVD removes noise from attention weight matrices while preserving signal directions important for factual knowledge. Freezing prevents catastrophic forgetting.
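The denoising intuition behind step 1 can be sketched with plain numpy — this illustrates truncated SVD on a synthetic low-rank-plus-noise matrix, not the library's internal code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "weight matrix": a rank-8 signal plus dense noise
signal = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))
noisy = signal + 0.3 * rng.standard_normal((64, 64))

# Truncated SVD: keep only the top-k singular directions
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 8
denoised = U[:, :k] * s[:k] @ Vt[:k, :]

# Dropping the noise-dominated tail moves us closer to the true signal
err_noisy = np.linalg.norm(noisy - signal)
err_denoised = np.linalg.norm(denoised - signal)
print(err_denoised < err_noisy)
```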
### Confidence Cartography (Behavioral Sensor)
For each token in a text, measure the probability the model assigns to it (teacher-forced). True statements get higher confidence than false ones. The ratio between true/false confidence is a behavioral signal for whether the model "believes" a fact.
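Mechanically, teacher-forcing means scoring each token of a fixed text under the model's next-token distribution. A toy numpy version of the per-token confidence computation — with random logits standing in for a real LM — looks like:

```python
import numpy as np

def token_confidences(logits, token_ids):
    """Probability the model assigns to each actual token.

    logits: (seq_len, vocab) next-token logits, teacher-forced
    token_ids: (seq_len,) the tokens that actually occur in the text
    """
    z = logits - logits.max(axis=-1, keepdims=True)  # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs[np.arange(len(token_ids)), token_ids]

# Toy example: 4 positions over a 10-token vocabulary
rng = np.random.default_rng(1)
logits = rng.standard_normal((4, 10))
tokens = np.array([3, 1, 4, 1])
conf = token_confidences(logits, tokens)
print(conf.mean(), conf.argmin())  # mean confidence, least-confident position
```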
### The Unification
Both use the same probes:
- **SVD importance scoring** runs forward+backward on probe texts to compute gradient magnitudes — which weights matter for encoding these facts
- **Confidence auditing** runs a forward pass on true vs false versions of the same probes — does the model assign higher probability to truth?
Compress with knowledge of what matters. Verify nothing was lost. Same probes, both sides.
## Experiments
```bash
# Quick demo (~5 min on Qwen-0.5B, ~8 min on 7B)
python examples/quick_demo.py
python examples/quick_demo.py --model Qwen/Qwen2.5-7B-Instruct
# Joint ablation: compression ratio vs confidence preservation
python experiments/joint_ablation.py --model Qwen/Qwen2.5-7B-Instruct
# Multi-seed CF90 validation
python experiments/run_cf90_multiseed.py --model Qwen/Qwen2.5-7B-Instruct --seeds 3
```
## Deployment
```bash
# Export to GGUF for llama.cpp / Ollama
python deployment/export_gguf.py --input compressed_model/ --output model.gguf --quantize q4_k_m
# Benchmark with vLLM
python deployment/vllm_benchmark.py --baseline Qwen/Qwen2.5-7B-Instruct --compressed ./compressed_model
```
See [`deployment/mlx_recipe.md`](deployment/mlx_recipe.md) for Apple Silicon inference with MLX.
## Platform Notes (Apple Silicon)
- Use **CPU** for compression and fine-tuning (MPS has matmul errors with some architectures and NaN gradients with frozen layers)
- Use **MLX** for fast inference after compression
- Set `HF_HOME` to external storage for large models
## Model Compatibility
Works on any HuggingFace causal LM with `model.model.layers[i].self_attn.{q,k,o}_proj` (standard for Qwen, Llama, Mistral) or `model.transformer.h` (GPT-2 style).
Validated on:
- **Qwen2.5**: 0.5B, 1.5B, 7B, 32B
- **Llama 2**: 7B
- Should work on Mistral, Phi, Gemma (same layer layout) — PRs with test results welcome
## Built On
This toolkit unifies two standalone research projects:
- [**Intelligent SVD**](https://github.com/SolomonB14D3/intelligent-svd) — CF90 compression method and safety rules
- [**Confidence Cartography**](https://github.com/SolomonB14D3/confidence-cartography) — False-belief detection via teacher-forced confidence
Both remain available as independent repos. Knowledge Fidelity combines their core ideas into a single pipeline with a shared probe system.
## Citation
```bibtex
@software{knowledge_fidelity,
author = {Bryan Sanchez},
title = {Knowledge Fidelity: Compress LLMs While Auditing What They Still Know},
year = {2026},
url = {https://github.com/SolomonB14D3/knowledge-fidelity}
}
```
## License
MIT
| text/markdown | Bryan Sanchez | null | null | null | null | llm, compression, svd, knowledge, false-beliefs, interpretability, confidence, mandela-effect, transformers, model-auditing, pytorch | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
... | [] | null | null | >=3.10 | [] | [] | [] | [
"torch>=2.0",
"transformers>=4.36",
"numpy",
"scipy",
"matplotlib>=3.7; extra == \"cartography\"",
"seaborn>=0.12; extra == \"cartography\"",
"tqdm; extra == \"cartography\"",
"lm-eval>=0.4; extra == \"eval\"",
"datasets; extra == \"eval\"",
"safetensors; extra == \"deploy\"",
"knowledge-fidelit... | [] | [] | [] | [
"Homepage, https://github.com/SolomonB14D3/knowledge-fidelity",
"Repository, https://github.com/SolomonB14D3/knowledge-fidelity",
"Issues, https://github.com/SolomonB14D3/knowledge-fidelity/issues"
] | twine/6.2.0 CPython/3.9.6 | 2026-02-21T05:50:43.680712 | knowledge_fidelity-0.1.1.tar.gz | 26,202 | 14/cf/eee09baf2caeac859ae16aae5b4be8afb9f97bea529c23cdf750b800ed2f/knowledge_fidelity-0.1.1.tar.gz | source | sdist | null | false | 71ade1e0d180f04c1128632be3ea5e1d | d3b0753f8fe30f7c6b7d5b167e20f5ee2660f3c9c1952576bdb6278d5fe5708d | 14cfeee09baf2caeac859ae16aae5b4be8afb9f97bea529c23cdf750b800ed2f | MIT | [
"LICENSE"
] | 217 |
2.4 | jpfs | 0.5.0 | Japan Fiscal Simulator - fiscal policy analysis with a New Keynesian DSGE model | # jpfs - Japan Fiscal Simulator
[](https://pypi.org/project/jpfs/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
A tool for simulating the impact of fiscal policies — consumption tax cuts, increased social security spending, subsidy programs, and more — on the Japanese economy. A medium-scale Smets-Wouters-class New Keynesian DSGE model implemented from scratch in Python.
## Features
- **Structural NK model with 14 equations**: 16 variables (5 state + 9 control + fiscal/tax block), 7 structural shocks
- **Five-sector economy**: households (habit formation), firms (Calvo price and wage rigidity), government, central bank, financial sector
- **QZ-decomposition-based BK/Klein solver**: general solution of the rational-expectations equilibrium via `scipy.linalg.ordqz`
- **Bayesian estimation**: Metropolis-Hastings MCMC sampler + Kalman-filter likelihood evaluation
- **Calibrated for the Japanese economy**: low-interest-rate environment, high debt level, 10% consumption tax
- **MCP server**: integration with Claude Desktop
- **CLI**: run simulations from the command line
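The Blanchard–Kahn check at the heart of a QZ-based solver can be illustrated in a few lines of scipy: for a linear system `A E_t[x_{t+1}] = B x_t`, count the explosive generalized eigenvalues of the matrix pencil and compare the count against the number of jump variables. This is a generic sketch, not jpfs's actual solver code:

```python
import numpy as np
from scipy.linalg import ordqz

def count_explosive_roots(A, B):
    """Count |lam| > 1 among generalized eigenvalues lam of B v = lam A v."""
    AA, BB, alpha, beta, Q, Z = ordqz(B, A, sort='iuc')  # stable roots first
    lam = np.abs(alpha) / np.maximum(np.abs(beta), 1e-12)
    return int(np.sum(lam > 1.0))

# Toy system with one stable root (0.5) and one explosive root (2.0)
A = np.eye(2)
B = np.diag([0.5, 2.0])
n_explosive = count_explosive_roots(A, B)
print(n_explosive)  # Blanchard-Kahn: must equal the number of jump variables
```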
## Installation
```bash
pip install jpfs
```
Or:
```bash
uvx jpfs --help
```
## Usage
### CLI
```bash
# Run simulations
jpfs simulate consumption_tax --shock -0.02 --periods 40 --graph
jpfs simulate price_markup --shock 0.01 --periods 40
# Compute fiscal multipliers
jpfs multiplier government_spending --horizon 8
# Show the steady state
jpfs steady-state
# Show parameters
jpfs parameters
# Start the MCP server
jpfs mcp
```
### Using from Python
```python
import japan_fiscal_simulator as jpfs
# Initialize the model
calibration = jpfs.JapanCalibration.create()
model = jpfs.DSGEModel(calibration.parameters)
# Steady state
ss = model.steady_state
print(f"Output: {ss.output:.4f}")
print(f"Consumption: {ss.consumption:.4f}")
# Simulation
simulator = jpfs.ImpulseResponseSimulator(model)
result = simulator.simulate_consumption_tax_cut(tax_cut=0.02, periods=40)
# Results
y_response = result.get_response("y")
print(f"Peak output effect: {max(y_response) * 100:.2f}%")
```
### MCP Integration
Add the following to the Claude Desktop config file (`claude_desktop_config.json`):
```json
{
"mcpServers": {
"jpfs": {
"command": "jpfs",
"args": ["mcp"]
}
}
}
```
## Documentation
| Document | Contents |
|----------|----------|
| [Getting Started](docs/getting-started.md) | From installation to your first simulation |
| [CLI Reference](docs/cli.md) | Usage and options for every command |
| [Python API](docs/python-api.md) | Using jpfs as a Python library |
| [Bayesian Estimation](docs/estimation.md) | Parameter estimation via MH-MCMC |
| [MCP Server](docs/mcp.md) | Configuring the Claude Desktop integration |
| [Mathematical Specification](docs/MATHEMATICAL_SPECIFICATION.md) | Mathematical statement of all 14 equations, the solution method, and the estimation procedure |
## Model Overview
### Households
- Intertemporal utility maximization (consumption Euler equation)
- Habit formation (external habits)
- Endogenous labor supply (marginal rate of substitution)
### Firms
- Calvo price rigidity + price indexation
- Calvo wage rigidity + wage markup shocks
- Cobb-Douglas production function with explicit real marginal cost
- Capital accumulation (investment adjustment costs, Tobin's Q)
### Government
- Consumption, labor income, and capital income taxes
- Government spending and transfer payments
- Fiscal rule (debt stabilization)
### Central Bank
- Taylor rule
- Interest rate smoothing
- Zero lower bound (ZLB) considerations
### Financial Sector
- Financial accelerator (simplified BGG-style)
- External finance premium
- Risk premium shocks
### Structural Shocks (7)
Technology (TFP), risk premium, investment-specific technology, wage markup, price markup, government spending, monetary policy
## Parameters
Key parameters (Japan calibration):
| Parameter | Value | Description |
|-----------|-------|-------------|
| β | 0.999 | Discount factor (low-interest-rate environment) |
| τ_c | 0.10 | Consumption tax rate (10%) |
| B/Y | 2.00 | Government debt-to-GDP ratio |
| ρ_R | 0.85 | Interest rate smoothing |
| θ | 0.75 | Calvo price stickiness |
## Output Format
### Simulation results (JSON)
```json
{
  "scenario": {
    "name": "2pt consumption tax cut",
    "policy_type": "consumption_tax",
    "shock_size": -0.02
  },
  "impulse_response": {
    "y": {"values": [...]},
    "c": {"values": [...]},
    "pi": {"values": [...]}
  },
  "fiscal_multiplier": {
    "impact_multiplier": 0.85,
    "cumulative_multiplier_4q": 1.12
  }
}
```
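The two multiplier figures above correspond to standard definitions: the impact multiplier is the first-period output response per unit of fiscal shock, and the cumulative multiplier is the ratio of (discounted) summed responses. A generic numpy sketch of the computation from impulse responses — the numbers are hypothetical, not jpfs output:

```python
import numpy as np

def fiscal_multipliers(dy, dg, horizon=4, r=0.0025):
    """Impact and cumulative fiscal multipliers from quarterly IRFs."""
    disc = (1.0 + r) ** -np.arange(horizon)       # discount factors
    impact = dy[0] / dg[0]
    cumulative = np.sum(disc * dy[:horizon]) / np.sum(disc * dg[:horizon])
    return impact, cumulative

# Hypothetical IRFs: output and government-spending responses
dy = np.array([0.85, 0.60, 0.40, 0.25, 0.15])
dg = np.array([1.00, 0.80, 0.64, 0.51, 0.41])
impact, cum4 = fiscal_multipliers(dy, dg, horizon=4)
print(round(impact, 2))  # 0.85
```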
## Development
```bash
# Install dependencies
uv sync
# Run tests
uv run pytest
# Type checking (strict mode)
uv run mypy src/japan_fiscal_simulator
# Lint & format
uv run ruff check src tests
uv run ruff format src tests
```
## Future Extension Candidates (Phase 6: Japan-Specific Extensions)
### Model extensions
- **Explicit ZLB modeling**: nonlinear dynamics at the zero lower bound
- **High-debt economy**: mix of Ricardian and non-Ricardian households, fiscal sustainability conditions
- **Demographics**: introducing OLG elements, trend decline in the labor force
- **Open-economy extension**: exchange rates, exports/imports, foreign interest rates
- **Full financial accelerator**: complete BGG-type implementation (currently simplified)
### Interfaces
- **Web UI**: interactive dashboard built with Streamlit/Gradio
- **API server**: REST API served with FastAPI
## License
MIT License
## References
- Smets, F., & Wouters, R. (2007). Shocks and frictions in US business cycles: A Bayesian DSGE approach.
- Bernanke, B. S., Gertler, M., & Gilchrist, S. (1999). The financial accelerator in a quantitative business cycle framework.
- Blanchard, O. J., & Kahn, C. M. (1980). The solution of linear difference models under rational expectations.
| text/markdown | Japan Fiscal Team | null | null | null | MIT | dsge, economics, fiscal-policy, japan, simulation | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: Japanese",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.13",
"Programming Language :: ... | [] | null | null | >=3.13 | [] | [] | [] | [
"jinja2>=3.1.0",
"matplotlib>=3.7.0",
"mcp>=1.0.0",
"numpy>=1.24.0",
"pandas>=2.0.0",
"pydantic>=2.0.0",
"rich>=13.0.0",
"scipy>=1.10.0",
"typer>=0.9.0",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""... | [] | [] | [] | [
"Homepage, https://github.com/DaisukeYoda/japan-fiscal-simulator",
"Repository, https://github.com/DaisukeYoda/japan-fiscal-simulator",
"Documentation, https://github.com/DaisukeYoda/japan-fiscal-simulator/tree/main/docs",
"Issues, https://github.com/DaisukeYoda/japan-fiscal-simulator/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:50:25.032953 | jpfs-0.5.0.tar.gz | 242,468 | bc/b3/bd45f503836d18148a3895988a5726dbe03c3ffa8deab5273cce8fa2aef4/jpfs-0.5.0.tar.gz | source | sdist | null | false | 0913440b5f6d1d38fd51442070d72630 | 994830f4db4fb256720311ec7ea8c35b3cbb92f18e9325c10bc9f940b755c401 | bcb3bd45f503836d18148a3895988a5726dbe03c3ffa8deab5273cce8fa2aef4 | null | [
"LICENSE"
] | 205 |
2.4 | agent-budget-guard | 0.2.0 | Budget-limited LLM API client wrapper to prevent runaway AI agent costs | # Agent Budget Guard
Hard spending limits for LLM API calls. Prevents runaway agent costs.
Wraps **OpenAI**, **Anthropic**, and **Google Gemini** — drop-in replacement for each SDK client with budget enforcement and no other changes to your code.
## Install
```bash
pip install agent-budget-guard
```
## Quickstart
### OpenAI
```python
from agent_budget_guard import BudgetedSession
client = BudgetedSession.openai(budget_usd=5.00)
# Non-streaming — identical to normal OpenAI usage
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)
# Streaming — works the same way, cost tracked from final chunk
for chunk in client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
):
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
print(client.session.get_summary())
```
### Anthropic
```python
client = BudgetedSession.anthropic(budget_usd=5.00)
# Non-streaming
response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.content[0].text)
# Streaming
for event in client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
):
    if event.type == "content_block_delta":
        print(event.delta.text, end="")
```
### Google Gemini
```python
client = BudgetedSession.google(budget_usd=5.00)
# Non-streaming
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Hello",
)
print(response.text)
# Streaming — Google uses a separate method (mirrors the underlying SDK)
for chunk in client.models.generate_content_stream(
    model="gemini-2.0-flash",
    contents="Hello",
):
    print(chunk.text, end="")
```
## API Keys
Set the standard environment variable for each provider:
```bash
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
export GOOGLE_API_KEY=...
```
Or pass `api_key=` directly to any factory method.
## Manual Wrapping
If you already have a client instance, wrap it directly:
```python
from openai import OpenAI
from agent_budget_guard import BudgetedSession
session = BudgetedSession(budget_usd=5.00)
client = session.wrap_openai(OpenAI())
```
Same pattern for `wrap_anthropic()` and `wrap_google()`.
## Callbacks
```python
client = BudgetedSession.openai(
    budget_usd=5.00,
    on_budget_exceeded=lambda e: print(f"Budget hit: {e}"),
    on_warning=lambda w: print(f"{w['threshold']}% of budget used"),
    warning_thresholds=[50, 90],  # default: [30, 80, 95]
)
```
**`on_budget_exceeded`** — called when a request would exceed the budget. The call returns `None` instead of raising. Without this callback, a `BudgetExceededError` is raised.
**`on_warning`** — called when utilization crosses a threshold. Each threshold fires once per session. The callback receives:
```python
{
    "threshold": 50,    # which % threshold was crossed
    "spent": 2.51,      # total spent so far
    "remaining": 2.49,  # budget left
    "budget": 5.00      # total budget
}
```
## Concurrent Agents
All agents share the same budget pool with atomic reservation — no race conditions.
```python
import concurrent.futures
from agent_budget_guard import BudgetedSession, BudgetExceededError
client = BudgetedSession.openai(budget_usd=10.00)
def agent_task(task_id):
    for _ in range(10):
        try:
            client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": f"Task {task_id}"}]
            )
        except BudgetExceededError:
            return

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    executor.map(agent_task, range(5))
```
## How It Works
1. Estimates cost before the API call (token counting + model pricing)
2. Atomically reserves that amount from the budget
3. Makes the API call only if within budget
4. Calculates actual cost from the response (or final stream chunk)
5. Commits actual cost, releases the reservation
6. Fires warning callbacks if thresholds are crossed
`spent + reserved <= budget` at all times, even under concurrency.
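The reserve/commit invariant can be sketched with a stdlib lock. This is a simplified model of the idea, not the package's actual implementation:

```python
import threading

class BudgetPool:
    """Invariant: spent + reserved <= budget, enforced atomically."""
    def __init__(self, budget):
        self.budget = budget
        self.spent = 0.0
        self.reserved = 0.0
        self._lock = threading.Lock()

    def reserve(self, estimate):
        with self._lock:
            if self.spent + self.reserved + estimate > self.budget:
                return False            # would exceed budget: refuse the call
            self.reserved += estimate
            return True

    def commit(self, estimate, actual):
        with self._lock:
            self.reserved -= estimate   # release the reservation...
            self.spent += actual        # ...and book the real cost

pool = BudgetPool(budget=1.00)
ok = pool.reserve(0.30)                 # before the API call
pool.commit(0.30, actual=0.27)          # after the response arrives
print(ok, pool.spent, pool.reserved)
```

Because `reserve` checks `spent + reserved + estimate` under the lock, concurrent callers can never collectively overshoot the budget, even while calls are in flight.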
## Session API
```python
client.session.get_total_spent() # USD spent so far
client.session.get_remaining_budget() # USD remaining (accounts for in-flight calls)
client.session.get_reserved() # USD reserved for in-flight calls
client.session.get_budget() # total budget
client.session.get_summary() # dict with all of the above
client.session.reset() # reset to zero (don't use mid-flight)
```
## Supported Models
**OpenAI** — GPT-5.2, GPT-5.1, GPT-5-mini, GPT-5-nano, GPT-4.1, GPT-4.1-mini, GPT-4.1-nano, GPT-4o, GPT-4o-mini, o1, o1-pro, o3, o3-pro, o4-mini, gpt-4-turbo, gpt-4, gpt-3.5-turbo
Batch pricing: `BudgetedSession.openai(budget_usd=5.00, tier="batch")`
**Anthropic** — claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5, claude-3-5-sonnet, claude-3-5-haiku, claude-3-opus, claude-3-sonnet, claude-3-haiku
**Google Gemini** — gemini-2.0-flash, gemini-2.0-flash-lite, gemini-2.0-pro, gemini-1.5-pro, gemini-1.5-flash, gemini-1.5-flash-8b
## Development
```bash
git clone https://github.com/Digital-Ibraheem/agent-budget-guard.git
cd agent-budget-guard
pip install -e ".[dev]"
pytest
```
| text/markdown | Agent Budget Contributors | null | null | null | MIT | ai-safety, anthropic, api-wrapper, budget, cost-control, gemini, google, llm, openai | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | null | null | >=3.8 | [] | [] | [] | [
"anthropic>=0.25.0",
"google-genai>=1.0.0",
"openai>=1.0.0",
"tiktoken>=0.5.0",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=7.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
... | [] | [] | [] | [
"Homepage, https://github.com/Digital-Ibraheem/agent-budget-guard",
"Repository, https://github.com/Digital-Ibraheem/agent-budget-guard",
"Issues, https://github.com/Digital-Ibraheem/agent-budget-guard/issues"
] | twine/6.2.0 CPython/3.14.3 | 2026-02-21T05:49:56.320965 | agent_budget_guard-0.2.0.tar.gz | 30,280 | 45/6b/8fe13fdd240f136c2802272422a6e48a79f8fbb2613b8183b0b4831a1a6e/agent_budget_guard-0.2.0.tar.gz | source | sdist | null | false | 9fc47b4a67bcb0a43291a5eac49f5272 | a97f2680057e777cf47a603d7caee0614ac416188c6414e8cfe08ee21c07ffcf | 456b8fe13fdd240f136c2802272422a6e48a79f8fbb2613b8183b0b4831a1a6e | null | [
"LICENSE"
] | 212 |
2.4 | opendp | 0.14.1a20260220001 | Python bindings for the OpenDP Library | <img src="https://docs.opendp.org/en/stable/_static/opendp-logo.png" width="200" alt="OpenDP logo">
[](https://www.repostatus.org/#wip)
[](https://opensource.org/license/MIT)
[](https://docs.opendp.org/en/stable/api/python/index.html)
[](https://docs.opendp.org/en/stable/api/r/)
[](https://docs.rs/crate/opendp/latest)
[](https://github.com/opendp/opendp/actions/workflows/smoke-test.yml?query=branch%3Amain)
[](https://github.com/opendp/opendp/actions/workflows/nightly.yml?query=branch%3Amain)
[](https://github.com/opendp/opendp/actions/workflows/weekly-doc-check.yml?query=branch%3Amain)
The OpenDP Library is a modular collection of statistical algorithms that adhere to the definition of
[differential privacy](https://en.wikipedia.org/wiki/Differential_privacy).
It can be used to build applications of privacy-preserving computations, using a number of different models of privacy.
OpenDP is implemented in Rust, with bindings for easy use from Python and R.
The architecture of the OpenDP Library is based on a conceptual framework for expressing privacy-aware computations.
This framework is described in the paper [A Programming Framework for OpenDP](https://opendp.org/files/2025/11/opendp_programming_framework_11may2020_1_01.pdf).
The OpenDP Library is part of the larger [OpenDP Project](https://opendp.org), a community effort to build trustworthy,
open source software tools for analysis of private data.
(For simplicity in these docs, when we refer to “OpenDP,” we mean just the library, not the entire project.)
## Status
OpenDP is under development, and we expect to [release new versions](https://github.com/opendp/opendp/releases) frequently,
incorporating feedback and code contributions from the OpenDP Community.
It's a work in progress, but it can already be used to build some applications and to prototype contributions that will expand its functionality.
We welcome you to try it and look forward to feedback on the library! However, please be aware of the following limitations:
> OpenDP, like all real-world software, has both known and unknown issues.
> If you intend to use OpenDP for a privacy-critical application, you should evaluate the impact of these issues on your use case.
>
> More details can be found in the [Limitations section of the User Guide](https://docs.opendp.org/en/stable/api/user-guide/limitations.html).
## Installation
Install OpenDP for Python with `pip` (the [package installer for Python](https://pypi.org/project/pip/)):
$ pip install opendp
Install OpenDP for R from an R session:
install.packages("opendp", repos = "https://opendp.r-universe.dev")
More information can be found in the [Getting Started section of the User Guide](https://docs.opendp.org/en/stable/getting-started/).
## Documentation
The full documentation for OpenDP is located at https://docs.opendp.org. Here are some helpful entry points:
* [User Guide](https://docs.opendp.org/en/stable/api/user-guide/index.html)
* [Python API Docs](https://docs.opendp.org/en/stable/api/python/index.html)
* [Contributor Guide](https://docs.opendp.org/en/stable/contributing/index.html)
## Getting Help
If you're having problems using OpenDP, or want to submit feedback, please reach out! Here are some ways to contact us:
<!--
All of these lists should be in sync:
- README.md
- docs/source/contributing/contact.rst
- docs/source/_templates/questions-feedback.html
- .github/ISSUE_TEMPLATE/config.yml
(although office hours are only listed here.)
-->
* Report a bug or request a feature on [Github](https://github.com/opendp/opendp/issues).
* Send general queries to [info@opendp.org](mailto:info@opendp.org), or email [security@opendp.org](mailto:security@opendp.org) if it is related to security.
* Join the conversation on [Slack](https://join.slack.com/t/opendp/shared_invite/zt-1t8rrbqhd-z8LiZiP06vVE422HJd6ciQ), or the [mailing list](https://groups.google.com/a/g.harvard.edu/g/opendp-community).
## Contributing
OpenDP is a community effort, and we welcome your contributions to its development!
If you'd like to participate, please contact us! We also have a [contribution process section in the Contributor Guide](https://docs.opendp.org/en/stable/contributing/contribution-process.html).
| text/markdown | The OpenDP Project | info@opendp.org | null | null | MIT | differential privacy | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | https://opendp.org | null | >=3.10 | [] | [] | [] | [
"deprecated",
"numpy; extra == \"numpy\"",
"randomgen>=2.0.0; extra == \"numpy\"",
"scikit-learn; extra == \"scikit-learn\"",
"numpy; extra == \"scikit-learn\"",
"randomgen>=2.0.0; extra == \"scikit-learn\"",
"polars==1.32.0; extra == \"polars\"",
"pyarrow; extra == \"polars\"",
"scikit-learn; extra... | [] | [] | [] | [
"Source, https://github.com/opendp/opendp",
"Issues, https://github.com/opendp/opendp/issues",
"Documentation, https://docs.opendp.org/"
] | twine/6.1.0 CPython/3.13.11 | 2026-02-21T05:49:45.709286 | opendp-0.14.1a20260220001-cp310-abi3-musllinux_1_2_x86_64.whl | 41,829,630 | b2/9e/4130ffbe553643ec3164f901154b90470c6f3f67908944fd61d46ecd0a1a/opendp-0.14.1a20260220001-cp310-abi3-musllinux_1_2_x86_64.whl | cp310 | bdist_wheel | null | false | 697abffd0456fda179ca67254ba9474d | 09491ea00446602881ec53b617eec4048037dec6895437915c557eb54024508a | b29e4130ffbe553643ec3164f901154b90470c6f3f67908944fd61d46ecd0a1a | null | [
"LICENSE"
] | 261 |
2.4 | coloraide | 8.4 | A color library for Python. | [![Donate via PayPal][donate-image]][donate-link]
[![Coverage Status][codecov-image]][codecov-link]
[![PyPI Version][pypi-image]][pypi-link]
[![PyPI Downloads][pypi-down]][pypi-link]
[![PyPI - Python Version][python-image]][pypi-link]
[![License][license-image-mit]][license-link]
# ColorAide
## Overview
ColorAide is a pure Python, object oriented approach to colors.
```python
>>> from coloraide import Color
>>> c = Color("red")
>>> c.to_string()
'rgb(255 0 0)'
>>> c.convert('hsl').to_string()
'hsl(0 100% 50%)'
>>> c.set("lch.chroma", 30).to_string()
'rgb(173.81 114.29 97.218)'
>>> Color("blue").mix("yellow", space="lch").to_string()
'rgb(255 65.751 107.47)'
```
ColorAide particularly has a focus on the following:
- Accurate colors.
- Proper round tripping (where reasonable).
- Be generally easy to pick up for the average user.
- Support modern CSS color spaces and syntax.
- Make accessible many new and old non-CSS color spaces.
- Provide a number of useful utilities such as interpolation, color distancing, blending, gamut mapping, filters,
correlated color temperature, color vision deficiency simulation, etc.
- Provide a plugin API to extend supported color spaces and approaches to various utilities.
- Allow users to configure defaults to their liking.
With ColorAide, you can specify a color, convert it to other color spaces, mix it with other colors, output it in
different CSS formats, and much more!
# Documentation
https://facelessuser.github.io/coloraide
## License
MIT
[codecov-image]: https://img.shields.io/codecov/c/github/facelessuser/coloraide/main.svg?logo=codecov&logoColor=aaaaaa&labelColor=333333
[codecov-link]: https://codecov.io/github/facelessuser/coloraide
[pypi-image]: https://img.shields.io/pypi/v/coloraide.svg?logo=pypi&logoColor=aaaaaa&labelColor=333333
[pypi-down]: https://img.shields.io/pypi/dm/coloraide.svg?logo=pypi&logoColor=aaaaaa&labelColor=333333
[pypi-link]: https://pypi.python.org/pypi/coloraide
[python-image]: https://img.shields.io/pypi/pyversions/coloraide?logo=python&logoColor=aaaaaa&labelColor=333333
[license-image-mit]: https://img.shields.io/badge/license-MIT-blue.svg?labelColor=333333
[license-link]: https://github.com/facelessuser/coloraide/blob/main/LICENSE.md
[donate-image]: https://img.shields.io/badge/Donate-PayPal-3fabd1?logo=paypal
[donate-link]: https://www.paypal.me/facelessuser
| text/markdown | null | Isaac Muse <Isaac.Muse@gmail.com> | null | null | null | color, color-contrast, color-conversion, color-difference, color-filters, color-harmonies, color-interpolation, color-manipulation, color-spaces, color-temperature, color-vision-deficiency, colour, css | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python ::... | [] | null | null | >=3.9 | [] | [] | [] | [
"typing-extensions; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://github.com/facelessuser/coloraide"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:48:36.972836 | coloraide-8.4.tar.gz | 20,414,877 | fc/2f/137decfe89f2b36ffe39e303bbcbca2fd1b2fe3c4dc9134a55ec2129b028/coloraide-8.4.tar.gz | source | sdist | null | false | fe86c6ff10641cc974a51eb6a4ea639f | 55c7f8c4a97fa470c6726a1b1b4e71d2fc0f12e1782069aa13587882f3159363 | fc2f137decfe89f2b36ffe39e303bbcbca2fd1b2fe3c4dc9134a55ec2129b028 | MIT | [
"LICENSE.md"
] | 360 |
2.4 | jubilee | 0.39 | Lightweight application framework built on pygame | # README.md
## Introduction
Raspberry Pi devices and other single-board computers (SBCs) provide an exciting platform for small-scale computing for hobbyist projects.
A common problem with such projects is the gap between a newly configured device, such as a fresh install of Raspberry Pi OS, and a component that is ready to be programmed for a project. That gap includes a host of basic questions like:
* How do I render text and use fonts?
* How do I draw graphics and display images?
* How do I render basic UI elements, like buttons that detect and respond to user input?
* How do I factor an application into modes with mode-specific processing and drawing?
* How do I separate UI rendering and input event handling from background processing?
* How do I enable communication among a UI-rendering process and background processes?
* How do I enforce desired frame rates for the UI and background processing?
* How do I load images, sounds, and other assets into usable libraries?
* How do I render sprites with animation sequences?
* How do I apply visual effects like popover messages and fade transitions?
* How do I apply a screen orientation other than the native orientation of the display?
* How do I play sounds and music?
* How do I receive and handle keyboard input?
* How do I create a scripted application that transitions among a set of modes and submodes?
* How do I maintain a shared and persistent application state?
* How do I handle application configuration and logging?
* How do I design an application for development on a workstation with mouse input, and for deployment on a device with touch input?
Many of these questions are addressed by libraries like pygame, but the features of such libraries are often too low-level for the sophisticated functionality of many applications. As a result, developers often need to spend time on infrastructure code - time which would be more effectively (and enjoyably) devoted to writing code for the project.
Jubilee is a lightweight app engine built on pygame. The purpose of Jubilee is to enable the rapid development of applications that typically have a main GUI process and one or more background worker processes, with support for graphical UI features, interprocess messaging, and application modes.
## Hello, World
Here is a simple Jubilee application with one mode and one background worker:
```python
import jubilee

class HelloMode(jubilee.Mode):
    def init(self):
        self.name = 'Hello Mode'

    def process(self):
        pass  # mode-specific UI processing can occur here

    def draw(self):
        self.app.center_text('Hello, World!')

class HelloWorker(jubilee.Worker):
    def init(self):
        self.name = 'Hello Worker'

    def process(self):
        pass  # high-frequency background processing can occur here

    def process_periodic(self):
        pass  # low-frequency background processing can occur here

class HelloApp(jubilee.App):
    def init(self):
        self.name = 'Hello App'
        self.add_mode(HelloMode)
        self.add_worker(HelloWorker)

if __name__ == '__main__':
    HelloApp().run()
```
This application contains one defined mode, which is automatically selected as the initial mode of the application. The App executes a loop that calls the `process` and `draw` methods of the mode (default 20 Hz each). The mode `draw` method displays "Hello, World!" in the center of the screen. The application also creates a background worker that runs as a separate process. The worker calls `process` frequently (default 20 Hz) and `process_periodic` occasionally (default 1 Hz).
Jubilee applications can include a rich set of modes with UI elements and navigation, multiple workers, and features like submodes, scripting, and sprite-based animations - while maintaining the simple architecture and stylistic readability shown above.
## Examples
The Examples folder contains a variety of example projects that run right out of the box:
* **Hello** - A Hello, World! project with an App class and a Worker (background) class.
* **Headless** - A project with no display. (Can still play sound and music.)
* **Pointer** - A project that demonstrates pointer input (and simple graphics).
* **Images** - A project that demonstrates images and sprite animations.
* **Image_Effects** - A project that demonstrates image effects.
* **Sound** - A project that demonstrates sound and music.
* **Controls** - A project that demonstrates various UI controls.
* **Modes** - A project that demonstrates two modes, packaged into two Mode classes.
* **Submodes** - A project that demonstrates submodes.
* **Script** - A project that demonstrates mode scripting.
* **Screen_Rotation** - A project that demonstrates 180-degree screen rotation. (Can also change screen_rotation in config.txt to 90 or 270.)
These projects can be used for quick reference, as sandboxes to experiment with the features, or as templates for new projects with similar features.
## Overview of Architecture and Features
Jubilee executes one process (App) that handles the display and input events, and one or more Worker processes that independently execute background processing, typically on different CPU cores.
The App runs in a loop that calls `process` to handle UI processing and `draw` to render to the screen (unless the App is declared as headless). Each Worker runs in a loop that frequently calls `process` (e.g., at 20 Hz) and occasionally calls `process_periodic` (e.g., at 1 Hz).
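Jubilee paces these loops internally, but the underlying pattern is a fixed-rate loop that sleeps off the remainder of each frame. A minimal sketch of that pattern (illustrative only; this is not Jubilee's actual loop, and the function name is hypothetical):

```python
import time

def run_at_rate(step, hz=20, iterations=5):
    """Call step() at a target rate by sleeping off each frame's remainder."""
    period = 1.0 / hz
    for _ in range(iterations):
        start = time.monotonic()
        step()
        # Sleep off whatever is left of this frame to hold the target rate.
        elapsed = time.monotonic() - start
        if elapsed < period:
            time.sleep(period - elapsed)

ticks = []
run_at_rate(lambda: ticks.append(time.monotonic()), hz=20, iterations=5)
print(len(ticks))  # → 5
```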
The App communicates with each Worker using a pair of message queues - an `app_queue` for transmitting messages from the App to the Worker, and a `worker_queue` for transmitting messages from the Worker to the App. Every message is a JSON object that is automatically serialized (via `json.dumps`) for transmission and deserialized (via `json.loads`) upon receipt.
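The mechanics can be illustrated with plain `multiprocessing` queues and the `json` module. Jubilee creates and manages the real queues internally, so the names below are stand-ins for explanation only:

```python
import json
from multiprocessing import Queue

# Hypothetical stand-ins for Jubilee's internal queue pair.
app_queue = Queue()     # App -> Worker
worker_queue = Queue()  # Worker -> App

# Sender side: the message dict is serialized to JSON text.
message = {"type": "status", "progress": 0.5}
worker_queue.put(json.dumps(message))

# Receiver side: the JSON text is deserialized back into a dict.
received = json.loads(worker_queue.get())
print(received["type"])  # → status
```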
The App can store a set of Modes, and can transition between them with `set_mode`. Each Mode includes a set of methods: `init` (called at application start), `enter` (called during `set_mode`), `click` (called to handle click events), `process`, `draw`, and `exit`. The Mode class includes a `controls` array, and instances of UI controls, such as Buttons, can be added to the mode during `init` or later. Jubilee automatically renders UI controls and handles click detection by invoking a click handler method. Buttons can be configured to execute a handler function, to switch automatically to a target mode, or to exit the application.
As shown in the Hello World app, a Jubilee app features subclasses of the App, Worker, and Mode classes that replace the stub methods with app-specific functionality. For example, each Mode subclass can provide app-specific `process` and `draw` methods, and each Worker subclass can provide app-specific `process` and `process_periodic` methods. This simple architecture simplifies and accelerates the application development process.
Additional features:
* **Configuration:** Some features of Jubilee are declared in `config.txt`, which App and Worker load during initialization. The App class features a set of default values in case `config.txt` is not present. During runtime, one Worker class (by default, the first one created) periodically checks the modification date of `config.txt` and pushes updates to the App via the messaging queue, which reduces redundant file-system operations and preserves the lifespan of flash-based storage.
* **Global, Persistent Application State:** The App stores an `app_state` dict containing application-wide data for all Modes. App state is saved incrementally via `set_app_state()` during normal operation and reloaded at startup to persist the state of the App. The App can also provide a default `app_state` to be used when a saved `app_state` is not found on startup.
* **Mode Contexts:** Jubilee can run in a modal context, where each app process loop invokes the `process` method for the current mode, or a modeless context, where each app process loop invokes the `process` method for *every* mode. Jubilee also supports a no-display ("headless") context that runs without any graphics or mouse / touch functionality, while maintaining all `process` methods and keyboard input checking.
* **Submodes:** A Mode can include a number of submodes. Creating a submode is easy - just add relevant methods that include the name of the submode (e.g., for a submode called "menu," add methods like `enter_menu`, `click_menu`, `process_menu`, and `draw_menu`). During execution, a submode can be selected (e.g., `app.mode.set_submode('menu')`), and the App will automatically call the submode-specific methods.
* **Scripting:** For applications that require a particular sequence of states and substates, Jubilee supports the definition of a script. The script defines a set of "scenes," each indicating a mode and a set of parameters, optionally including a submode. The app or any mode can call `app.advance_scene()` to advance to the next scene in the script, or `app.select_scene(scene_id)` to jump to a particular scene by name or number.
* **Resource Libraries:** Jubilee looks for folders called `images` and `sounds` in the main app folder and automatically loads them into app-level resource libraries. For each mode, Jubilee also looks for a subfolder of the same name, looks for further `images` and `sounds` subfolders for the mode, and automatically loads mode-level resource libraries. Image `blit` and `play_sound` methods can use resources specified by name. Jubilee looks first in the library for the current mode, then in the application library, and finally tries to load the resource using the name as a path.
* **Animations and Sprites:** Each images folder can contain a subfolder for an animation as a set of images corresponding to animation frames. Each subfolder is loaded into an Animation as a set of frames. For each mode, a set of Sprites can be generated, each having an Animation object, x/y coordinates, and a current animation frame number. The mode `draw` method can render all of the sprites for the mode by calling `mode.render_sprites()`. A sprite can be configured to animate automatically through all frames of an Animation at a given rate. Further, if some frames for an Animation are named consistently and numbered - e.g.: `walk_left_1.jpg`, `walk_left_2.jpg`, etc. - then the Animation generates a Sequence, indexed by the shared name `walk_left`, and containing a list of the indexes in the Animation `frames` list that correspond to the animation sequence. A Sprite can be set to a specific Sequence in a given Animation, and can animate (automatically or on-demand) over the frames of the Sequence.
* **Input:** On Linux, Jubilee will handle touch input if the config parameter pointer_input is True. On macOS, Jubilee automatically handles mouse events. Jubilee also stores and provides keyboard input on a key basis (`new_keys` for newly pressed keys and `held_keys` for all keys that are currently down) and as a keyboard buffer (`keyboard_buffer` as a string and `keyboard_buffer_chars` as an array of keys).
* **Screen Rotation:** SDL2 does not support 90/180/270-degree hardware screen rotation. Jubilee enables screen rotation by inserting an additional surface between the screen and the drawing functions to receive all of the graphical content, then applying a pygame rotation to the surface before blitting it to the screen. This is inefficient and can be slow, but it is the only practical option, as the architecture of SDL2 apparently cannot be adapted to include hardware support for screen rotation. This issue is addressed in SDL3, so this functionality will likely improve greatly once SDL3 support is added to pygame.
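The submode mechanism described above amounts to suffix-based method dispatch: the engine looks up a method named `<base>_<submode>` and calls it if the mode defines one. An illustrative sketch of that dispatch pattern (not Jubilee's actual source; the class and method names here are hypothetical):

```python
class Mode:
    """Minimal sketch of suffix-based submode dispatch."""
    def __init__(self):
        self.submode = None

    def set_submode(self, name):
        self.submode = name
        # Call enter_<submode>() if the subclass defines it.
        handler = getattr(self, f'enter_{name}', None)
        if handler:
            handler()

    def dispatch(self, base):
        # e.g. dispatch('process') calls process_menu() when submode == 'menu'
        if self.submode:
            handler = getattr(self, f'{base}_{self.submode}', None)
            if handler:
                handler()

class DemoMode(Mode):
    def __init__(self):
        super().__init__()
        self.log = []

    def enter_menu(self):
        self.log.append('enter_menu')

    def process_menu(self):
        self.log.append('process_menu')

m = DemoMode()
m.set_submode('menu')
m.dispatch('process')
print(m.log)  # → ['enter_menu', 'process_menu']
```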
| text/markdown | null | David Stein <jubilee@steinemail.com> | null | null | null | pygame | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers"
] | [] | null | null | null | [] | [] | [] | [
"evdev; sys_platform == \"linux\"",
"numpy",
"psutil; sys_platform == \"linux\"",
"pygame",
"random_user_agent",
"requests"
] | [] | [] | [] | [
"Homepage, https://github.com/neuron-whisperer/jubilee",
"Documentation, https://github.com/neuron-whisperer/jubilee/blob/main/Jubilee%20Reference.md",
"Repository, https://github.com/neuron-whisperer/jubilee.git",
"Issues, https://github.com/neuron-whisperer/jubilee/issues"
] | twine/6.2.0 CPython/3.13.4 | 2026-02-21T05:45:21.691557 | jubilee-0.39.tar.gz | 60,697 | d4/45/8bbcda3ec12dfbaf8c842d764a828f02095a1a937764867ac9853507e88b/jubilee-0.39.tar.gz | source | sdist | null | false | b815294b19aa0d0c85b7399dba63d8ae | 6500231e0b6d1d67e749b3e19260000a171d8e0ea69e458d7fa7813bbc01c5ec | d4458bbcda3ec12dfbaf8c842d764a828f02095a1a937764867ac9853507e88b | GPL-3.0-or-later | [
"LICENSE"
] | 234 |
2.4 | bdshare | 1.2.0 | A utility for crawling historical and Real-time Quotes of DSE(Dhaka Stock Exchange) | # bdshare

[](https://bdshare.readthedocs.io/en/latest/?badge=latest)



**bdshare** is a Python library for fetching live and historical market data from the Dhaka Stock Exchange (DSE). It handles scraping, retries, caching, and rate limiting so you can focus on your analysis.
---
## Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Core Concepts](#core-concepts)
- [Usage Guide](#usage-guide)
- [Live Trading Data](#live-trading-data)
- [Historical Data](#historical-data)
- [Market & Index Data](#market--index-data)
- [News & Announcements](#news--announcements)
- [Saving to CSV](#saving-to-csv)
- [OOP Client (BDShare)](#oop-client-bdshare)
- [Error Handling](#error-handling)
- [API Reference](#api-reference)
- [Examples](#examples)
- [Contributing](#contributing)
- [Roadmap](#roadmap)
- [Disclaimer](#disclaimer)
---
## Installation
**Requirements:** Python 3.9+
```bash
pip install bdshare
```
Install from source (latest development version):
```bash
pip install -U git+https://github.com/rochi88/bdshare.git
```
Dependencies installed automatically: `pandas`, `requests`, `beautifulsoup4`, `lxml`
---
## Quick Start
```python
from bdshare import get_current_trade_data, get_hist_data
# Live prices for all instruments
df = get_current_trade_data()
print(df.head())
# Historical data for a specific symbol
df = get_hist_data('2024-01-01', '2024-01-31', 'GP')
print(df.head())
```
Or use the object-oriented client:
```python
from bdshare import BDShare
with BDShare() as bd:
    print(bd.get_market_summary())
    print(bd.get_current_trades('ACI'))
```
---
## Core Concepts
| Concept | Details |
|---|---|
| **Retries** | All network calls retry up to 3 times with exponential back-off |
| **Fallback URL** | Every request has a primary and an alternate DSE endpoint |
| **Caching** | The `BDShare` client caches responses automatically (configurable TTL) |
| **Rate limiting** | Built-in sliding-window limiter (5 calls/second) prevents being blocked |
| **Errors** | All failures raise `BDShareError` — never silent |
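For illustration, a sliding-window limiter like the one described above can be sketched as follows. This is the general technique, not bdshare's actual `RateLimiter` source:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most max_calls within any rolling window of `window` seconds."""
    def __init__(self, max_calls=5, window=1.0):
        self.max_calls = max_calls
        self.window = window
        self.calls = deque()  # timestamps of recent calls

    def acquire(self):
        while True:
            now = time.monotonic()
            # Drop timestamps that have fallen out of the window.
            while self.calls and now - self.calls[0] >= self.window:
                self.calls.popleft()
            if len(self.calls) < self.max_calls:
                self.calls.append(now)
                return
            # Window is full: wait until the oldest call expires.
            time.sleep(self.window - (now - self.calls[0]))

limiter = SlidingWindowLimiter(max_calls=5, window=1.0)
for _ in range(6):
    limiter.acquire()  # the sixth call blocks until the window frees up
```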
---
## Usage Guide
### Live Trading Data
```python
from bdshare import get_current_trade_data, get_dsex_data, get_current_trading_code
# All instruments — returns columns: symbol, ltp, high, low, close, ycp, change, trade, value, volume
df = get_current_trade_data()
# Single instrument (case-insensitive)
df = get_current_trade_data('GP')
# DSEX index entries
df = get_dsex_data()
# Just the list of tradeable symbols
codes = get_current_trading_code()
print(codes['symbol'].tolist())
```
### Historical Data
```python
from bdshare import get_hist_data, get_basic_hist_data, get_close_price_data
import datetime as dt
start = '2024-01-01'
end = '2024-03-31'
# Full historical data (ltp, open, high, low, close, volume, trade, value…)
# Indexed by date, sorted newest-first
df = get_hist_data(start, end, 'ACI')
# Simplified OHLCV — sorted oldest-first, ready for TA libraries
df = get_basic_hist_data(start, end, 'ACI')
# Set date as index explicitly
df = get_basic_hist_data(start, end, 'ACI', index='date')
# Rolling 2-year window
end = dt.date.today()
start = end - dt.timedelta(days=2 * 365)
df = get_basic_hist_data(str(start), str(end), 'GP')
# Close prices only
df = get_close_price_data(start, end, 'ACI')
```
> **Column order note:** `get_basic_hist_data` intentionally returns OHLCV in standard order
> (`open`, `high`, `low`, `close`, `volume`) to be compatible with libraries like `ta`, `pandas-ta`, and `backtrader`.
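For example, a rolling indicator only makes sense on chronologically ordered rows. A small illustration with made-up OHLCV values shaped like `get_basic_hist_data` output (oldest-first):

```python
import pandas as pd

# Hypothetical OHLCV frame; the values are invented for illustration.
df = pd.DataFrame({
    'open':   [10.0, 10.5, 10.2, 10.8],
    'high':   [10.6, 10.9, 10.7, 11.0],
    'low':    [ 9.8, 10.3, 10.0, 10.5],
    'close':  [10.4, 10.6, 10.3, 10.9],
    'volume': [1000, 1200,  900, 1500],
})
# Rolling windows assume oldest-first order; a 3-day SMA of the close:
df['sma_3'] = df['close'].rolling(3).mean()
print(round(df['sma_3'].iloc[-1], 2))  # → 10.6
```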
### Market & Index Data
```python
from bdshare import (
    get_market_info,
    get_market_info_more_data,
    get_market_depth_data,
    get_latest_pe,
    get_sector_performance,
    get_top_gainers_losers,
    get_company_info,
)
# Last 30 days of market summary (DSEX, DSES, DS30, DGEN, volumes, market cap)
df = get_market_info()
# Historical market summary between two dates
df = get_market_info_more_data('2024-01-01', '2024-03-31')
# Order book (buy/sell depth) for a symbol
df = get_market_depth_data('ACI')
# P/E ratios for all listed companies
df = get_latest_pe()
# Sector-wise performance
df = get_sector_performance()
# Top 10 gainers and losers (adjust limit as needed)
df = get_top_gainers_losers(limit=10)
# Detailed company profile
tables = get_company_info('GP')
```
### News & Announcements
```python
from bdshare import get_news, get_agm_news, get_all_news
# Unified dispatcher — news_type: 'all' | 'agm' | 'corporate' | 'psn'
df = get_news(news_type='all')
df = get_news(news_type='agm')
df = get_news(news_type='corporate', code='GP')
df = get_news(news_type='psn', code='ACI') # price-sensitive news
# Direct function calls
df = get_agm_news() # AGM / dividend declarations
df = get_all_news(code='BEXIMCO') # All news for one symbol
df = get_all_news('2024-01-01', '2024-03-31', 'GP') # Filtered by date + symbol
```
### Saving to CSV
```python
from bdshare import get_basic_hist_data, Store
import datetime as dt
end = dt.date.today()
start = end - dt.timedelta(days=365)
df = get_basic_hist_data(str(start), str(end), 'GP')
Store(df).save() # saves to current directory as a CSV
```
---
## OOP Client (BDShare)
The `BDShare` class wraps all functions with automatic caching and rate limiting.
```python
from bdshare import BDShare
bd = BDShare(cache_enabled=True) # cache_enabled=True is the default
```
### Context manager (auto-cleans cache and session)
```python
with BDShare() as bd:
    data = bd.get_current_trades('GP')
```
### Market methods
```python
bd.get_market_summary() # DSEX/DSES/DS30 indices + stats (1-min TTL)
bd.get_company_profile('ACI') # Company profile (1-hr TTL)
bd.get_latest_pe_ratios() # All P/E ratios (1-hr TTL)
bd.get_top_movers(limit=10) # Top gainers/losers (5-min TTL)
bd.get_sector_performance() # Sector breakdown (5-min TTL)
```
### Trading methods
```python
bd.get_current_trades() # All live prices (30-sec TTL)
bd.get_current_trades('GP') # Single symbol
bd.get_dsex_index() # DSEX index entries (1-min TTL)
bd.get_trading_codes() # All tradeable symbols (24-hr TTL)
bd.get_historical_data('GP', '2024-01-01', '2024-03-31') # OHLCV history
```
### News methods
```python
bd.get_news(news_type='all') # All news (5-min TTL)
bd.get_news(news_type='corporate', code='GP')
bd.get_news(news_type='psn') # Price-sensitive news
```
### Utility methods
```python
bd.clear_cache() # Flush all cached data
bd.configure(proxy_url='http://proxy:8080')
print(bd.version) # Package version string
```
---
## Error Handling
All failures raise `BDShareError`. Never catch bare `Exception` — you'll miss bugs.
```python
from bdshare import BDShare, BDShareError
bd = BDShare()
try:
    df = bd.get_historical_data('INVALID', '2024-01-01', '2024-01-31')
except BDShareError as e:
    print(f"DSE error: {e}")
    # safe fallback logic here
```
Common causes of `BDShareError`:
- Symbol not found in the response table
- DSE site returned a non-200 status after all retries
- Table structure changed on the DSE page (report as a bug)
- Network timeout
---
## API Reference
### Trading Functions
| Function | Parameters | Returns | Description |
|---|---|---|---|
| `get_current_trade_data(symbol?)` | `symbol: str` | DataFrame | Live prices (all or one symbol) |
| `get_dsex_data(symbol?)` | `symbol: str` | DataFrame | DSEX index entries |
| `get_current_trading_code()` | — | DataFrame | All tradeable symbols |
| `get_hist_data(start, end, code?)` | `str, str, str` | DataFrame | Full historical OHLCV |
| `get_basic_hist_data(start, end, code?, index?)` | `str, str, str, str` | DataFrame | Simplified OHLCV (TA-ready) |
| `get_close_price_data(start, end, code?)` | `str, str, str` | DataFrame | Close + prior close |
| `get_last_trade_price_data()` | — | DataFrame | Last trade from DSE text file |
### Market Functions
| Function | Parameters | Returns | Description |
|---|---|---|---|
| `get_market_info()` | — | DataFrame | 30-day market summary |
| `get_market_info_more_data(start, end)` | `str, str` | DataFrame | Historical market summary |
| `get_market_depth_data(symbol)` | `str` | DataFrame | Order book (buy/sell depth) |
| `get_latest_pe()` | — | DataFrame | P/E ratios for all companies |
| `get_company_info(symbol)` | `str` | list[DataFrame] | Detailed company tables |
| `get_sector_performance()` | — | DataFrame | Sector-wise performance |
| `get_top_gainers_losers(limit?)` | `int` (default 10) | DataFrame | Top movers |
### News Functions
| Function | Parameters | Returns | Description |
|---|---|---|---|
| `get_news(news_type?, code?)` | `str, str` | DataFrame | Unified news dispatcher |
| `get_agm_news()` | — | DataFrame | AGM / dividend declarations |
| `get_all_news(start?, end?, code?)` | `str, str, str` | DataFrame | All DSE news |
| `get_corporate_announcements(code?)` | `str` | DataFrame | Corporate actions |
| `get_price_sensitive_news(code?)` | `str` | DataFrame | Price-sensitive news |
### `get_news` `news_type` values
| Value | Equivalent direct function |
|---|---|
| `'all'` | `get_all_news()` |
| `'agm'` | `get_agm_news()` |
| `'corporate'` | `get_corporate_announcements()` |
| `'psn'` | `get_price_sensitive_news()` |
---
## Examples
### Stock performance summary
```python
import datetime as dt
from bdshare import BDShare, BDShareError
def summarize(symbol: str, days: int = 30) -> dict:
    end = dt.date.today()
    start = end - dt.timedelta(days=days)
    with BDShare() as bd:
        try:
            df = bd.get_historical_data(symbol, str(start), str(end))
        except BDShareError as e:
            print(f"Could not fetch data: {e}")
            return {}
    return {
        'symbol': symbol,
        'current': df['close'].iloc[0],
        'period_high': df['high'].max(),
        'period_low': df['low'].min(),
        'avg_volume': df['volume'].mean(),
        'change_pct': (df['close'].iloc[0] - df['close'].iloc[-1])
                      / df['close'].iloc[-1] * 100,
    }
result = summarize('GP', days=30)
print(f"{result['symbol']}: {result['change_pct']:.2f}% over 30 days")
```
### Simple portfolio tracker
```python
from bdshare import BDShare, BDShareError
PORTFOLIO = {
    'GP': {'qty': 100, 'cost': 450.50},
    'ACI': {'qty': 50, 'cost': 225.75},
    'BEXIMCO': {'qty': 200, 'cost': 125.25},
}
with BDShare() as bd:
    total_cost = total_value = 0
    for symbol, pos in PORTFOLIO.items():
        try:
            row = bd.get_current_trades(symbol).iloc[0]
            price = row['ltp']
            market_value = pos['qty'] * price
            cost = pos['qty'] * pos['cost']
            pnl = market_value - cost
            print(f"{symbol:10s} price={price:8.2f} P&L={pnl:+10.2f}")
            total_cost += cost
            total_value += market_value
        except BDShareError as e:
            print(f"{symbol}: fetch error — {e}")
    print(f"\nPortfolio P&L: {total_value - total_cost:+.2f} "
          f"({(total_value/total_cost - 1)*100:+.2f}%)")
```
### Fetch and screen top gainers above 5%
```python
from bdshare import get_top_gainers_losers
df = get_top_gainers_losers(limit=20)
big_movers = df[df['change'] > 5]
print(big_movers[['symbol', 'ltp', 'change']])
```
---
## Contributing
Contributions are welcome! To get started:
```bash
git clone https://github.com/rochi88/bdshare.git
cd bdshare
pip install -e ".[dev]"
pytest
```
Please open an issue before submitting a pull request for significant changes. See [CONTRIBUTING.md](CONTRIBUTING.md) for the full guide.
---
## Roadmap
- [ ] Chittagong Stock Exchange (CSE) support
- [ ] WebSocket streaming for real-time ticks
- [ ] Built-in technical indicators (`ta` integration)
- [ ] Portfolio management helpers
- [ ] Docker demo examples
- [x] Shared session with exponential back-off
- [x] `lxml`-based fast parsing
- [x] `BDShareError` for clean error handling
- [x] Unified `get_news()` dispatcher
- [x] Rate limiter and response caching
---
## Support
- **Docs:** [bdshare.readthedocs.io](https://bdshare.readthedocs.io/)
- **Bugs / Features:** [GitHub Issues](https://github.com/rochi88/bdshare/issues)
- **Discussion:** [GitHub Discussions](https://github.com/rochi88/bdshare/discussions)
---
## License
MIT — see [LICENSE](LICENSE) for details.
## Disclaimer
bdshare is intended for educational and research use. Always respect DSE's terms of service. The authors are not responsible for financial decisions made using this library.
# Change log
All notable changes to **bdshare** are documented here.
Format follows [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
Versioning follows [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
---
## [1.2.0] - 2026-02-21
### Changed
- Updated
## [1.1.6] - 2026-02-20
### Changed
- Updated `readthedocs` structure
## [1.1.5] - 2026-02-20
### Changed
- Renamed `get_hist_data()` → `get_historical_data()` (improved readability)
- Renamed `get_basic_hist_data()` → `get_basic_historical_data()` (improved readability)
- Renamed `get_market_inf()` → `get_market_info()` (improved readability)
- Renamed `get_market_inf_more_data()` → `get_market_info_more_data()` (improved readability)
- Renamed `get_company_inf()` → `get_company_info()` (improved readability)
### Deprecated
- `get_hist_data()` — still callable but emits `DeprecationWarning`; will be removed in 2.0.0
- `get_basic_hist_data()` — still callable but emits `DeprecationWarning`; will be removed in 2.0.0
- `get_market_inf()` — still callable but emits `DeprecationWarning`; will be removed in 2.0.0
- `get_market_inf_more_data()` — still callable but emits `DeprecationWarning`; will be removed in 2.0.0
- `get_company_inf()` — still callable but emits `DeprecationWarning`; will be removed in 2.0.0
### Added
- `BDShareError` custom exception — all network and scraping failures now raise this instead of silently printing and returning `None`
- Shared `requests.Session` (`_session`) across all modules — reuses TCP connections for significantly faster repeated calls
- Exponential back-off retry logic in `safe_get()` — pauses 0.2 s → 0.4 s → 0.8 s between attempts before raising `BDShareError`
- Fallback URL support in `safe_get()` — primary and alternate DSE endpoints tried within the same retry attempt
- `lxml`-based HTML parsing in `_fetch_table()` with `html.parser` fallback — replaces `html5lib` (~10× faster)
- `_safe_num()` helper — all scraped values now returned as typed numerics (`float`/`int`) instead of raw strings
- `_parse_trade_rows()` and `_filter_symbol()` internal helpers in `trading.py` — eliminate duplicated parsing logic between `get_current_trade_data()` and `get_dsex_data()`
- `get_news()` unified dispatcher — accepts `news_type` of `'all'`, `'agm'`, `'corporate'`, or `'psn'`
- `get_corporate_announcements()` and `get_price_sensitive_news()` — previously missing functions now fully implemented
- Column count guards (`len(cols) < N`) across all table parsers — malformed rows are skipped rather than raising `IndexError`
- Backward-compatibility aliases for all renamed functions with `DeprecationWarning`
- Type hints throughout all public functions
- Comprehensive docstrings with parameter and return documentation
### Fixed
- `get_agm_news()`: corrected field name typo `agmData` → `agmDate`
- `get_agm_news()`: corrected field name typo `vanue` → `venue`
- `get_market_depth_data()`: no longer creates a new `requests.Session` on every retry iteration
- `get_basic_hist_data()`: redundant double `sort_index()` call removed
- `get_hist_data()` and `get_close_price_data()`: no longer silently return `None` on empty results
- `RateLimiter` in `__init__.py`: switched from `time.time()` to `time.monotonic()` for reliable elapsed-time measurement
### Removed
- `html5lib` as the default parser — replaced by `lxml` with `html.parser` fallback
- Silent `print(e)` error handling — all error paths now raise `BDShareError`
- Dead `timeout` parameter from `BDShare.configure()` — it had no effect
## [1.1.4] - 2025-09-16
### Added
- Enhanced error handling and robustness across all functions
- Improved parameter handling for news functions
- Better file path resolution for utility functions
- Comprehensive fallback mechanisms for network issues
### Changed
- Fixed get_all_news() function to support date range parameters as documented
- Enhanced market info functions with better error handling
- Improved Store utility with proper file saving mechanism
- Fixed Tickers utility with correct file path resolution
### Fixed
- All major function issues identified in testing (18/18 functions now working)
- Parameter signature mismatches in news functions
- HTML parsing errors in market data functions
- File saving issues in Store utility
- Missing tickers.json file dependency
## [1.1.2] - 2024-12-31
### Added
- n/a
### Changed
- update tests
### Fixed
- n/a
## [1.1.1] - 2024-12-31
### Added
- n/a
### Changed
- update runner
### Fixed
- n/a
## [1.1.0] - 2024-12-31
### Added
- new function for getting company info
### Changed
- n/a
### Fixed
- n/a
## [1.0.4] - 2024-12-30
### Added
- n/a
### Changed
- changed lint
### Fixed
- fixed typo
## [1.0.3] - 2024-07-29
### Added
- n/a
### Changed
- n/a
### Fixed
- check fix for latest P/E url [#6]
## [1.0.2] - 2024-07-29
### Added
- n/a
### Changed
- n/a
### Fixed
- fixed latest P/E url [#6]
## [1.0.0] - 2024-03-04
### Added
- Updated docs
### Changed
- n/a
## [0.7.2] - 2024-03-04
### Added
- Updated docs
### Changed
- n/a
## [0.7.1] - 2024-03-04
### Added
- n/a
### Changed
- fixed market depth data api
## [0.7.0] - 2024-03-04
### Added
- n/a
### Changed
- n/a
## [0.6.0] - 2024-03-03
### Added
- n/a
### Changed
- n/a
## [0.5.1] - 2024-02-29
### Added
- n/a
### Changed
- n/a
## [0.5.0] - 2024-02-29
### Added
- fixed `Store` dataframe-to-CSV file method
### Changed
- n/a
## [0.4.0] - 2023-03-12
### Added
- n/a
### Changed
- changed package manager
## [0.3.2] - 2022-10-10
### Added
- n/a
### Changed
- n/a
## [0.3.1] - 2022-06-15
### Added
- n/a
### Changed
- n/a
## [0.2.1] - 2021-08-01
### Added
-
### Changed
- `get_current_trading_code()`
## [0.2.0] - 2021-06-01
### Added
- added get_market_depth_data
- added get_dsex_data
- added 'dse.com.bd' as a redundant (fallback) endpoint
### Changed
- Changed documentation
- changed get_agm_news
- changed get_all_news
## [0.1.4] - 2020-08-22
### Added
- added get_market_inf_more_data
### Changed
- Changed documentation
## [0.1.3] - 2020-08-20
### Added
- html5lib
- added get params
### Changed
- post request to get
## [0.1.2] - 2020-05-21
### Added
- modified index declaration
## [0.1.1] - 2020-05-20
### Added
- modified index declaration
## [0.1.0] - 2020-04-08
### Added
- added git tag
- `VERSION.txt`
### Changed
- `setup.py`
- `HISTORY.md` to `CHANGELOG.md`
## [0.0.1] - 2020-04-06
### Added
- `get_hist_data(), get_current_trade_data()`
- `HISTORY.md`
| text/markdown | Raisul Islam | raisul.me@gmail.com | null | null | MIT | Crawling, DSE, Financial Data | [
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"requests",
"beautifulsoup4",
"html5lib",
"pandas",
"lxml"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:43:52.786527 | bdshare-1.2.0.tar.gz | 36,743 | fc/9a/1eaca9643feb92c4bc7a32a64a847b992aeba31213a14ebe526b00e606b3/bdshare-1.2.0.tar.gz | source | sdist | null | false | 42772155dba1f0d3faa31bacddda756e | 1c305773609c07b3003f93166ea9e97342291b34d290588c6b352658b2110914 | fc9a1eaca9643feb92c4bc7a32a64a847b992aeba31213a14ebe526b00e606b3 | null | [
"LICENSE"
] | 297 |
2.4 | synth-ai | 0.8.2 | Serverless Posttraining for Agents - Core AI functionality and tracing | # Synth
[](https://www.python.org/)
[](https://pypi.org/project/synth-ai/)
[](https://crates.io/crates/synth-ai)
[](LICENSE)
Prompt Optimization, Graphs, and Agent Infrastructure
Use the SDK in Python (`uv add synth-ai`) or Rust (beta) (`cargo add synth-ai`), or hit our serverless endpoints in any language.
<p align="center">
<picture align="center">
<source media="(prefers-color-scheme: dark)" srcset="assets/langprobe_v2_dark.png">
<source media="(prefers-color-scheme: light)" srcset="assets/langprobe_v2_light.png">
<img alt="Shows a bar chart comparing prompt optimization performance across GPT-4.1 Nano, GPT-4o Mini, and GPT-5 Nano with baseline vs GEPA optimized." src="assets/langprobe_v2_light.png">
</picture>
</p>
<p align="center">
<i>Average accuracy on <a href="https://arxiv.org/abs/2502.20315">LangProBe</a> prompt optimization benchmarks.</i>
</p>
## Demo Walkthroughs
- [GEPA Banking77 Prompt Optimization](https://docs.usesynth.ai/cookbooks/banking77-colab)
- [GEPA Crafter VLM Verifier Optimization](https://docs.usesynth.ai/cookbooks/verifier-optimization)
- [GraphGen Image Style Matching](https://docs.usesynth.ai/cookbooks/graphs/overview)
Benchmark and demo runner source files have moved to the `Benchmarking` repo (`../Benchmarking` in a sibling checkout).
## Highlights
- 🎯 **GEPA Prompt Optimization** - Automatically improve prompts with evolutionary search. See 70%→95% accuracy gains on Banking77, +62% on critical game achievements
- 🔍 **Zero-Shot Verifiers** - Fast, accurate rubric-based evaluation with configurable scoring criteria
- 🧬 **GraphGen** - Train custom verifier graphs optimized for your specific workflows, or custom pipelines for other tasks
- 🧰 **Environment Pools** - Managed sandboxes and browser pools for coding and computer-use agents
- 🚀 **No Code Changes** - Wrap existing code in a FastAPI app and optimize via HTTP. Works with any language or framework
- ⚡️ **Local Development** - Run experiments locally with tunneled containers. No cloud setup required
- 🗂️ **Multi-Experiment Management** - Track and compare prompts/models across runs with built-in experiment queues
## Getting Started
### SDK (Python)
```bash
pip install synth-ai==0.8.2
# or
uv add synth-ai
```
### GEPA Compatibility (Python)
Drop-in usage for `gepa-ai` style workflows:
```python
from synth_ai import gepa
trainset, valset, _ = gepa.examples.aime.init_dataset()
result = gepa.optimize(
    seed_candidate={"system_prompt": "You are a helpful assistant."},
    trainset=trainset,
    valset=valset,
    task_lm="openai/gpt-4.1-mini",
    max_metric_calls=150,
    reflection_lm="openai/gpt-5",
)
print(result.best_candidate["system_prompt"])
```
Requires `SYNTH_API_KEY` and access to the Synth backend.
Full Banking77 runthrough: `../Benchmarking/demos/gepa_banking77_compat.py`.
### SDK (Rust - Beta)
```bash
cargo add synth-ai
```
### TUI (Homebrew)
```bash
brew install synth-laboratories/tap/synth-ai-tui
synth-ai-tui
```
The TUI provides a visual interface for managing jobs, viewing events, and monitoring optimization runs.
## OpenCode Skills (Synth API)
The Synth-AI TUI integrates with OpenCode and ships a **`synth-api`** skill.
```bash
# List packaged skills shipped with synth-ai
uvx synth-ai skill list
```
```bash
uvx synth-ai skill install synth-api --dir ~/custom/opencode/skill
```
## Container Deploy (Cloud)
Deploy a Container with a Dockerfile and get a stable `container_url`:
```bash
export SYNTH_API_KEY=sk_live_...
synth container deploy \
--name my-container \
--app my_module:app \
--dockerfile ./Dockerfile \
--context . \
--wait
```
Use the emitted `container_url` in training configs. Harbor auth uses `SYNTH_API_KEY`
as the container API key.
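To make the key reuse concrete, here is a stdlib-only sketch of assembling a job config around the emitted `container_url`. The config shape and helper function are illustrative, not the authoritative Synth schema:

```python
import os

def build_job_config(container_url: str, api_key: str) -> dict:
    """Hypothetical helper: wrap a deployed container URL into a job config."""
    return {
        "job_type": "prompt_learning",
        "container_url": container_url,
        # Harbor auth reuses the Synth API key as the container API key
        "container_api_key": api_key,
    }

config = build_job_config(
    "https://my-container.usesynth.ai",  # placeholder for the emitted container_url
    os.environ.get("SYNTH_API_KEY", "sk_live_example"),
)
assert config["container_api_key"] == config["container_api_key"]
```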
## Tunnels
Synth optimization jobs need HTTPS access to your local container. Two tunnel backends are available:
### SynthTunnel (Recommended)
Relay-based tunnel — no external binary required, supports 128 concurrent requests:
```python
from synth_ai.core.tunnels import TunneledContainer
tunnel = await TunneledContainer.create(local_port=8001, api_key="sk_live_...")
print(tunnel.url) # https://st.usesynth.ai/s/rt_...
print(tunnel.worker_token) # pass to job config
```
Use with optimization jobs:
```python
job = PromptLearningJob.from_dict(
    config,
    container_url=tunnel.url,
    container_worker_token=tunnel.worker_token,
)
```
### Cloudflare Quick Tunnel
Anonymous tunnel via trycloudflare.com — no API key needed:
```python
from synth_ai.core.tunnels import TunneledContainer, TunnelBackend
tunnel = await TunneledContainer.create(
    local_port=8001,
    backend=TunnelBackend.CloudflareQuickTunnel,
)
```
Requires `cloudflared` installed (`brew install cloudflared`). Use `container_api_key` instead of `worker_token` when configuring jobs.
See the [tunnels documentation](https://docs.usesynth.ai/sdk/tunnels) for the full comparison.
### Auth Basics (Don’t Mix These)
There are **three different keys** in the Container + SynthTunnel flow:
- **Synth API key** (`SYNTH_API_KEY`): Auth for the **backend** (`SYNTH_BACKEND_URL`).
  - Sent as `Authorization: Bearer <SYNTH_API_KEY>`.
  - Used when submitting jobs to `http://127.0.0.1:8080` (local) or `https://api.usesynth.ai` (cloud).
- **Environment API key** (`ENVIRONMENT_API_KEY`): Auth for your **container**.
  - Sent as `x-api-key: <ENVIRONMENT_API_KEY>` to `/health`, `/info`, `/rollout`, etc.
  - Minted/managed by `ensure_container_auth()`.
- **SynthTunnel worker token** (`tunnel.worker_token`): Auth for **tunnel relay → container**.
  - Passed to jobs as `container_worker_token`.
  - **Never** use this as a backend API key.
Common failures:
- `Invalid API key` on `/api/jobs/*` means the backend received the wrong key.
- `SYNTH_TUNNEL_ERROR: Invalid worker token` means the tunnel relay token is wrong.
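The three keys can be pinned down with plain header dicts. A stdlib-only sketch: the header names and field names come from the list above, while the helper functions and key values are illustrative:

```python
def backend_headers(synth_api_key: str) -> dict:
    # Backend auth: bearer token sent to the Synth backend
    return {"Authorization": f"Bearer {synth_api_key}"}

def container_headers(environment_api_key: str) -> dict:
    # Container auth: x-api-key header on /health, /info, /rollout, etc.
    return {"x-api-key": environment_api_key}

def job_tunnel_fields(worker_token: str) -> dict:
    # SynthTunnel auth: goes in the job config, never in a backend header
    return {"container_worker_token": worker_token}

# Mixing these up produces the failures listed above: sending a worker token
# as a bearer token yields "Invalid API key" on /api/jobs/*.
assert backend_headers("sk_live_x")["Authorization"].startswith("Bearer ")
assert "x-api-key" in container_headers("env_key_x")
```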
## Branching and CI
### Branch model (all repos)
```
dev ──PR──> staging ──PR──> main
               │
          integration
          tests run
| Branch | Purpose |
|-----------|----------------------------------|
| `dev` | Daily development |
| `staging` | Pre-release gate with full CI |
| `main` | Released / production code |
### How CI works for this repo
Cross-repo integration tests live in the **testing** repo (`synth-laboratories/testing`).
1. When a PR targets `staging` in `testing`, CI checks out `synth-ai` at the matching branch (e.g. `staging`). Falls back to `main` if the branch doesn't exist.
2. Tests that exercise synth-ai code:
   - `synth_ai_unit_tests` — `pytest tests/unit` (runs on every push)
   - `synth_ai_all_tests` — package-focused SDK tests from `synth-ai-tests/` in the `testing` repo
   - `testing_unit_tests` — `pytest synth-ai-tests/unit/`
### Standard workflow
1. Work on `dev`.
2. When ready to validate, push `dev` and open a PR in `testing`: `dev -> staging`.
3. CI runs unit and cross-repo integration tests against the matching `synth-ai` branch.
4. After staging is green, merge `staging -> main` in each repo.
### Running tests locally
From the `testing` repo (sibling checkout):
```bash
cd ../testing
bazel test //:offline_tests # unit tests only
bazel test //:no_llm_tests # everything except LLM-dependent tests
bazel test //:all_tests # everything
```
Or directly:
```bash
uv run pytest tests/unit -v
```
See `testing/CLAUDE.md` for the full test tier and suite reference.
## Testing
Run the TUI integration tests:
```bash
cd tui/app
bun test
```
Synth is maintained by devs behind the [MIPROv2](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=jauNVA8AAAAJ&citation_for_view=jauNVA8AAAAJ:u5HHmVD_uO8C) prompt optimizer.
## Documentation
**[docs.usesynth.ai](https://docs.usesynth.ai)**
- GEPA proposer backend guide (spec): `../specifications/tanha/current/systems/platform/gepa_proposer_backends.md`
- GEPA guide (Mintlify): [docs.usesynth.ai/prompt-optimization/gepa](https://docs.usesynth.ai/prompt-optimization/gepa)
## Community
**[Join our Discord](https://discord.gg/cjfAMcCZef)**
## GEPA Prompt Optimization (SDK)
Run GEPA prompt optimization programmatically:
```python
import asyncio
import os
from synth_ai.sdk.api.train.prompt_learning import PromptLearningJob
from synth_ai.sdk.container import ContainerConfig, create_container
# Create a local container: app = create_container(ContainerConfig(app_id="my_app", handler=my_handler))
# Create and submit a GEPA job
pl_job = PromptLearningJob.from_dict({
    "job_type": "prompt_learning",
    "config": {
        "prompt_learning": {
            "gepa": {
                "rollout": {"budget": 100},
                "population_size": 10,
                "generations": 5,
            }
        }
    },
    "container_id": "my_container",
})
pl_job.submit()
result = pl_job.stream_until_complete(timeout=3600.0)
print(f"Best score: {result.best_score}")
```
See the [Banking77 walkthrough](https://docs.usesynth.ai/cookbooks/banking77-colab) for a complete example with local containers.
For proposer backend selection (`prompt`, `rlm`, `agent`), see `../specifications/tanha/current/systems/platform/gepa_proposer_backends.md`.
## Online MIPRO (SDK, Ontology Enabled)
Run online MIPRO so rollouts call a proxy URL and rewards stream back to the optimizer. Enable ontology by setting `MIPRO_ONT_ENABLED=1` and `HELIX_URL` on the backend, then follow the [Banking77 online MIPRO notes](simpler_online_mipro.txt).
```python
import os
from synth_ai.sdk.optimization.policy import MiproOnlineSession
# Use the demo config shape from Benchmarking/demos (see sibling repo)
mipro_config = {...}
session = MiproOnlineSession.create(
    config_body=mipro_config,
    api_key=os.environ["SYNTH_API_KEY"],
)
urls = session.get_prompt_urls()
proxy_url = urls["online_url"]
# Use proxy_url in your rollout loop, then report rewards
session.update_reward(
    reward_info={"score": 0.9},
    rollout_id="rollout_001",
    candidate_id="candidate_abc",
)
```
## Graph Evolve: Optimize RLM-Based Verifier Graphs
Train a verifier graph with an RLM backbone for long-context evaluation. See the [Image Style Matching walkthrough](https://docs.usesynth.ai/cookbooks/graphs/overview) for a complete Graph Evolve example:
```python
from synth_ai.sdk.api.train.graph_evolve import GraphEvolveJob
# Train an RLM-based verifier graph
verifier_job = GraphEvolveJob.from_dataset(
    dataset="verifier_dataset.json",
    graph_type="rlm",
    policy_models=["gpt-4.1"],
    proposer_effort="medium",  # Use "medium" (gpt-4.1) or "high" (gpt-5.2)
    rollout_budget=200,
)
verifier_job.submit()
result = verifier_job.stream_until_complete(timeout=3600.0)
# Run inference with trained verifier
verification = verifier_job.run_verifier(
    trace=my_trace,
    context={"rubric": my_rubric},
)
print(f"Reward: {verification.reward}, Reasoning: {verification.reasoning}")
```
## Zero-Shot Verifiers (SDK)
Run a built-in verifier graph with rubric criteria passed at runtime. See the [Crafter VLM demo](https://docs.usesynth.ai/cookbooks/verifier-optimization) for verifier optimization:
```python
import asyncio
import os
from synth_ai.sdk.graphs import VerifierClient
async def run_verifier():
    client = VerifierClient(
        base_url=os.environ["SYNTH_BACKEND_BASE"],
        api_key=os.environ["SYNTH_API_KEY"],
    )
    result = await client.evaluate(
        job_id="zero_shot_verifier_single",
        trace={"session_id": "s", "session_time_steps": []},
        rubric={
            "event": [{"id": "accuracy", "weight": 1.0, "description": "Correctness"}],
            "outcome": [{"id": "task_completion", "weight": 1.0, "description": "Completed task"}],
        },
        options={"event": True, "outcome": True, "model": "gpt-5-nano"},
        policy_name="my_policy",
        container_id="my_task",
    )
    return result

asyncio.run(run_verifier())
```
You can also call arbitrary graphs directly with the Rust SDK:
```rust
use serde_json::json;
use synth_ai::{GraphCompletionRequest, Synth};
#[tokio::main]
async fn main() -> Result<(), synth_ai::Error> {
    let synth = Synth::from_env()?;
    let request = GraphCompletionRequest {
        job_id: "zero_shot_verifier_rubric_single".to_string(),
        input: json!({
            "trace": {"session_id": "s", "session_time_steps": []},
            "rubric": {"event": [], "outcome": []},
        }),
        model: None,
        prompt_snapshot_id: None,
        stream: None,
    };
    let resp = synth.complete(request).await?;
    println!("Output: {:?}", resp.output);
    Ok(())
}
```
| text/markdown; charset=UTF-8; variant=GFM | null | Synth AI <josh@usesynth.ai> | null | null | MIT | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"pydantic>=2.0.0",
"requests>=2.32.3",
"pynacl>=1.5.0",
"tqdm>=4.66.4",
"typing-extensions>=4.0.0",
"rich>=13.9.0",
"openai>=1.99.0",
"fastapi>=0.115.12",
"uvicorn>=0.34.2",
"numpy>=2.2.3",
"sqlalchemy>=2.0.42",
"click<8.2,>=8.1.7",
"aiohttp>=3.8.0",
"nest-asyncio>=1.6.0",
"httpx>=0.28.1... | [] | [] | [] | [
"Homepage, https://github.com/synth-laboratories/synth-ai",
"Issues, https://github.com/synth-laboratories/synth-ai/issues",
"Repository, https://github.com/synth-laboratories/synth-ai"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:43:48.308853 | synth_ai-0.8.2.tar.gz | 757,676 | 95/9f/236bb65c628a72aeeab4ca883a53896a4e647d508eabbbdcece95538de78/synth_ai-0.8.2.tar.gz | source | sdist | null | false | aeb8606e0246dc799fa350369aa7dc70 | 044150a43807a1ed9c2ecf9cf3ac444b257a88cd423c65122d42e19d59c533db | 959f236bb65c628a72aeeab4ca883a53896a4e647d508eabbbdcece95538de78 | null | [
"synth_ai_py/LICENSE"
] | 275 |
2.4 | AutoRAG-Research | 0.0.4 | Automate your RAG research. | 
Automate your RAG research
[Documentation](https://nomadamas.github.io/AutoRAG-Research/)
## What is AutoRAG-Research?
| Problem | What AutoRAG-Research does |
|---------|----------------------------------------------------------------------------------|
| Every dataset has a different format. | We unify the formats and pre-computed embeddings for you. Just download and use. |
| Comparing against SOTA pipelines requires implementing each one. | We implement SOTA pipelines from papers. Benchmark yours against them. |
| Every paper claims SOTA. Which one actually is? | Run all pipelines on your data with one command and compare. |
Which pipeline is really SOTA? What datasets are out there? Find it all here.
## Available Datasets
We provide pre-processed datasets with unified formats. Some include **pre-computed embeddings**.
**Text**
| Dataset | Pipeline Support | Description |
|---------|:----------------:|-------------|
| [BEIR](https://arxiv.org/pdf/2104.08663) | Retrieval | Standard IR benchmark across 14 diverse domains (scifact, nq, hotpotqa, ...) |
| [MTEB](https://aclanthology.org/2023.eacl-main.148.pdf) | Retrieval | Large-scale embedding benchmark with any MTEB retrieval task |
| [RAGBench](https://arxiv.org/pdf/2407.11005v1) | Retrieval + Generation | End-to-end RAG evaluation with generation ground truth across 12 domains |
| [MrTyDi](https://aclanthology.org/2021.mrl-1.12.pdf) | Retrieval | Multilingual retrieval across 11 languages |
| [BRIGHT](https://arxiv.org/pdf/2407.12883) | Retrieval + Generation | Reasoning-intensive retrieval with gold answers |
**Image**
| Dataset | Pipeline Support | Description |
|---------|:-----------------------:|-------------|
| [ViDoRe](https://arxiv.org/pdf/2407.01449) | Retrieval + Generation* | Visual document QA with 1:1 query-to-page mapping |
| [ViDoRe v2](https://arxiv.org/pdf/2505.17166) | Retrieval | Visual document retrieval with corpus-level search |
| [ViDoRe v3](https://arxiv.org/pdf/2601.08620) | Retrieval | Visual document retrieval across 8 industry domains |
| [VisRAG](https://arxiv.org/pdf/2410.10594) | Retrieval + Generation* | Vision-based RAG benchmark (ChartQA, SlideVQA, DocVQA, ...) |
**Text + Image**
| Dataset | Pipeline Support | Description |
|---------|:----------------:|-------------|
| [Open-RAGBench](https://huggingface.co/datasets/vectara/open_ragbench) | Retrieval + Generation | arXiv PDF RAG with generation ground truth and multimodal understanding |
> *\* Generation ground truth is available only for some sub-datasets.*
## Available Pipelines
SOTA pipelines implemented from papers, ready to run. There are two ways to build a RAG pipeline:
### 1. Retrieval Pipeline
Standalone retrieval pipelines. Use them on their own for retrieval-only evaluation. If you also want to evaluate generation quality, combine any retrieval pipeline with an LLM using the **BasicRAG** generation pipeline — it takes a retrieval pipeline as input, feeds the retrieved results to an LLM, and produces generated answers you can evaluate with generation metrics.
| Pipeline | Description | Reference |
|----------------------------------------------------------------------------------|------------------------------------------------------------------------|-----------|
| [Vector Search (DPR)](https://aclanthology.org/2020.emnlp-main.550.pdf) | Dense vector similarity search (single-vector and multi-vector MaxSim) | EMNLP 20 |
| [BM25](https://www.staff.city.ac.uk/~sbrp622/papers/foundations_bm25_review.pdf) | Sparse full-text retrieval | - |
| [HyDE](https://arxiv.org/abs/2212.10496) | Hypothetical Document Embeddings | ACL 23 |
| [Hybrid RRF](https://cormack.uwaterloo.ca/cormacksigir09-rrf.pdf) | Reciprocal Rank Fusion of two retrieval pipelines | - |
| [Hybrid CC](https://arxiv.org/pdf/2210.11934) | Convex Combination fusion of two retrieval pipelines | - |
### 2. Generation Pipeline
These pipelines handle retrieval and generation together as a single algorithm. Each implements a specific paper's approach end-to-end.
| Pipeline | Description | Reference |
|----------------------------------------------|--------------------------------------------------------|---------------|
| [BasicRAG](https://arxiv.org/pdf/2005.11401) | Any retrieval pipeline + LLM | NeurIPS 20 |
| [IRCoT](https://arxiv.org/abs/2212.10509) | Interleaving Retrieval with Chain-of-Thought | ACL 23 |
| [ET2RAG](https://arxiv.org/abs/2511.01059) | Majority voting on context subsets | Preprint / 25 |
| [VisRAG](https://arxiv.org/abs/2410.10594) | Vision-language model generation from retrieved images | ICLR 25 |
| [MAIN-RAG](https://arxiv.org/abs/2501.00332) | Multi-Agent Filtering RAG | ACL 25 |
## Available Metrics
**Retrieval** — Set-based: Recall, Precision, F1 / Rank-aware: nDCG, MRR, MAP
**Generation** — N-gram based: BLEU, METEOR, ROUGE / Embedding based: BERTScore, SemScore
> **Missing something?** [Open an issue](https://github.com/vkehfdl1/AutoRAG-Research/issues) and we will implement it. Or check our [Plugin](https://nomadamas.github.io/AutoRAG-Research/plugins/) guide.
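As a concrete reference point, the set-based retrieval metrics above reduce to simple set arithmetic over retrieved vs. relevant document IDs. A minimal sketch, independent of the AutoRAG-Research implementation:

```python
def recall_precision_f1(retrieved: list, relevant: set) -> tuple:
    """Set-based retrieval metrics over retrieved doc IDs vs. gold relevant IDs."""
    hits = len(set(retrieved) & relevant)
    recall = hits / len(relevant) if relevant else 0.0
    precision = hits / len(retrieved) if retrieved else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return recall, precision, f1

# 2 of 3 relevant docs retrieved; 2 of 4 retrieved docs are relevant
r, p, f1 = recall_precision_f1(["d1", "d2", "d3", "d4"], {"d1", "d3", "d9"})
```

Rank-aware metrics (nDCG, MRR, MAP) additionally weight hits by their position in the ranking.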
## Setup
### Install
> We strongly recommend using [uv](https://docs.astral.sh/uv/) as your virtual environment manager. If you use uv, you **must** activate the virtual environment first — otherwise the CLI will not use your uv environment.
**Option 1: Install Script (Recommended, Mac OS / Linux)**
The install script handles Python environment, package installation, and PostgreSQL setup in one go.
```bash
curl -LsSf https://raw.githubusercontent.com/NomaDamas/AutoRAG-Research/main/scripts/install.sh -o install.sh
bash install.sh
```
<details>
<summary><b>Manual Install</b></summary>
1. Create and activate a virtual environment (Python 3.10+):
```bash
# uv (recommended)
uv venv .venv --python ">=3.10"
source .venv/bin/activate
# or standard venv
python3 -m venv .venv
source .venv/bin/activate
```
2. Install the package:
```bash
# uv (recommended)
uv add autorag-research
# or pip
pip install autorag-research
```
3. Set up PostgreSQL with VectorChord (Docker recommended):
```bash
autorag-research init
cd postgresql && docker compose up -d
```
4. Initialize configuration files:
```bash
autorag-research init
```
This creates `configs/` with database, pipeline, metric, and experiment YAML files.
Now you can edit the YAML files to set up your own experiments.
</details>
### Quick Start
```bash
# 1. See available datasets
autorag-research show datasets
# 2-1. Ingest a dataset
autorag-research ingest --name beir --extra dataset-name=scifact
# 2-2. Or download a pre-ingested dataset including pre-computed embeddings
autorag-research show datasets beir # pass an ingestor name to see if pre-ingested versions are available
autorag-research data restore beir beir_arguana_test_qwen_3_0.6b # example command
# 3. Configure LLM — pick or create a config in configs/llm/
vim configs/llm/openai-gpt5-mini.yaml
# Set your embedding models in the embedding/ folder if needed
# 4. Edit experiment config — choose pipelines and metrics
vim configs/experiment.yaml
# 5. Check your DB connection
vim configs/db.yaml
# 6. Run your experiment
autorag-research run --db-name=beir_scifact_test
# 7. View results in a Gradio leaderboard UI (need to load your env variable for DB connection)
python -m autorag_research.reporting.ui
```
`configs/experiment.yaml` is where you define which pipelines and metrics to run:
```yaml
db_name: beir_scifact_test
pipelines:
retrieval: [bm25, vector_search]
generation: [basic_rag]
metrics:
retrieval: [recall, ndcg]
generation: [rouge]
```
Generation pipelines (and some retrieval pipelines like HyDE) require an LLM. The `llm` field in each pipeline config references a file in `configs/llm/` by name (without `.yaml`):
```yaml
# configs/pipelines/generation/basic_rag.yaml
llm: openai-gpt5-mini # → loads configs/llm/openai-gpt5-mini.yaml
```
Pre-configured LLM options include `anthropic-claude-4.5-sonnet`, `openai-gpt5-mini`, `google-gemini-3-flash`, `ollama`, `vllm`, and more. See all options in `configs/llm/`.
For the full YAML configuration guide, see the [Documentation](https://nomadamas.github.io/AutoRAG-Research/cli/).
### Commands
| Command | Description |
|---------|-------------|
| `autorag-research init` | Download default config files to `./configs/` |
| `autorag-research show datasets` | List available pre-built datasets to download |
| `autorag-research show ingestors` | List available data ingestors and their parameters |
| `autorag-research show pipelines` | List available pipeline configurations |
| `autorag-research show metrics` | List available evaluation metrics |
| `autorag-research show databases` | List ingested database schemas |
| `autorag-research ingest --name <name>` | Ingest a dataset into PostgreSQL |
| `autorag-research drop database --db-name <name>` | Drop a PostgreSQL database quickly |
| `autorag-research run --db-name <name>` | Run experiment with configured pipelines and metrics |
You can also pass `--help` to any command to see detailed usage instructions.
We also provide a [CLI Reference](https://nomadamas.github.io/AutoRAG-Research/cli/).
## Build Your Own Plugin
AutoRAG-Research supports a plugin system so you can add your own retrieval pipelines, generation pipelines, or evaluation metrics — and use them alongside the built-in ones in the same experiment.
A plugin is a standalone Python package. You implement your logic, register it via Python's `entry_points`, and the framework discovers and loads it automatically. No need to fork the repo or modify the core codebase.
**What you can build:**
| Plugin Type | What it does | Base Class |
|-------------|--------------|------------|
| Retrieval Pipeline | Custom search/retrieval logic | `BaseRetrievalPipeline` |
| Generation Pipeline | Custom retrieve-then-generate logic | `BaseGenerationPipeline` |
| Retrieval Metric | Custom retrieval evaluation metric | `BaseRetrievalMetricConfig` |
| Generation Metric | Custom generation evaluation metric | `BaseGenerationMetricConfig` |
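A hypothetical plugin skeleton is sketched below. `BaseRetrievalPipeline` is the real base class named in the table above, but its actual interface lives in `autorag_research`; the stub, the `retrieve` signature, and the toy scoring here are illustrative only:

```python
from abc import ABC, abstractmethod

class BaseRetrievalPipeline(ABC):  # stand-in for autorag_research's base class
    @abstractmethod
    def retrieve(self, query: str, top_k: int) -> list:
        ...

class MySearchPipeline(BaseRetrievalPipeline):
    """Toy plugin: ranks a fixed corpus by term overlap with the query."""

    def __init__(self, corpus: dict):
        self.corpus = corpus

    def retrieve(self, query: str, top_k: int) -> list:
        terms = set(query.lower().split())
        # Sort doc IDs by descending overlap between query terms and doc terms
        scored = sorted(
            self.corpus,
            key=lambda doc_id: -len(terms & set(self.corpus[doc_id].lower().split())),
        )
        return scored[:top_k]

pipe = MySearchPipeline({"d1": "dense retrieval", "d2": "sparse bm25 retrieval"})
assert pipe.retrieve("bm25 retrieval", top_k=1) == ["d2"]
```

The real plugin is then registered through your package's `entry_points` so the framework can discover it, as described above.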
**How it works:**
```bash
# 1. Scaffold — generates a ready-to-edit project with config, code, YAML, and tests
autorag-research plugin create my_search --type=retrieval
# 2. Implement — edit the generated pipeline.py (or metric.py)
cd my_search_plugin
vim src/my_search_plugin/pipeline.py
# 3. Install — register the plugin in your environment
pip install -e .
# 4. Sync — copy the plugin's YAML config into your project's configs/ directory
autorag-research plugin sync
# 5. Use — add it to experiment.yaml and run like any built-in pipeline
autorag-research run --db-name=my_dataset
```
After `plugin sync`, your plugin appears in `configs/pipelines/` or `configs/metrics/` and can be referenced in `experiment.yaml` just like any built-in component.
For the full implementation guide, see the [Plugin Documentation](https://nomadamas.github.io/AutoRAG-Research/plugins/).
## Agent Skill: Query Results with Natural Language
AutoRAG-Research ships with an [agent skill](https://vercel.com/changelog/introducing-skills-the-open-agent-skills-ecosystem) that lets AI coding agents (Claude Code, Codex, Kiro, Cursor, etc.) query your pipeline results directly from PostgreSQL using natural language.
```bash
# Install globally
npx skills add NomaDamas/AutoRAG-Research --skill autorag-query
```
> **You**: "Which pipeline has the best BLEU score?"
>
> **Agent**: "**hybrid_search_v2** achieved the highest BLEU score of **0.85**."
For detailed usage, script options, and query templates, see the [Agent Skill documentation](https://nomadamas.github.io/AutoRAG-Research/agent-skill/).
## Contributing
We are an open source project and always welcome contributors who love RAG! Feel free to open issues or submit pull requests on GitHub.
You can check our [Contribution Guide](https://nomadamas.github.io/AutoRAG-Research/contributing/) for more details.
## Acknowledgements
This project is made by the creator of [AutoRAG](https://github.com/Marker-Inc-Korea/AutoRAG), Jeffrey & Bobb Kim.
All works are done in [NomaDamas](https://github.com/NomaDamas), AI Hacker House in Seoul, Korea.
| text/markdown | null | NomaDamas <vkehfdl1@gmail.com> | null | null | Apache-2.0 | python | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming L... | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"beir>=2.2.0",
"bert-score>=0.3.13",
"datasets<3.0.0",
"evaluate>=0.4.6",
"httpx>=0.20.0",
"huggingface-hub>=0.36.0",
"hydra-core>=1.3.0",
"infinity-client>=0.0.77",
"langchain-core>=0.3.0",
"mteb>=1.34.0",
"nltk>=3.9.2",
"numpy>=2.2.6",
"omegaconf>=2.3.0",
"pandas>=2.3.3",
"pgvector>=0.... | [] | [] | [] | [
"Homepage, https://vkehfdl1.github.io/AutoRAG-Research/",
"Repository, https://github.com/vkehfdl1/AutoRAG-Research",
"Documentation, https://vkehfdl1.github.io/AutoRAG-Research/"
] | uv/0.9.7 | 2026-02-21T05:43:09.218492 | autorag_research-0.0.4.tar.gz | 810,479 | ac/d7/804694a723b5de7a225fd38da5350632e35f85bb7c51d46607a2e23555e0/autorag_research-0.0.4.tar.gz | source | sdist | null | false | 98a37204fef2a4a647820caac6c85ba3 | df0e1cb9fd204c08d4b4328fb2b9db99bd3121e326183c1e6bdacb20a19d43e9 | acd7804694a723b5de7a225fd38da5350632e35f85bb7c51d46607a2e23555e0 | null | [
"LICENSE"
] | 0 |
2.4 | junos-ops | 0.8.0 | Automated JUNOS package management tool for Juniper Networks devices | # junos-ops
[](https://pypi.org/project/junos-ops/)
[](https://github.com/shigechika/junos-ops/actions/workflows/ci.yml)
[](https://pypi.org/project/junos-ops/)
[日本語版 / Japanese](https://github.com/shigechika/junos-ops/blob/main/README.ja.md)
A tool for automatic detection of Juniper device models and automated JUNOS package updates.
## Features
- Automatic device model detection and package mapping
- Safe package copy via SCP with checksum verification
- Pre-install package validation
- Rollback support (model-specific handling for MX/EX/SRX)
- Scheduled reboot
- Parallel RSI (request support information) / SCF (show configuration | display set) collection
- Dry-run mode (`--dry-run`) for pre-flight verification
- Parallel execution via ThreadPoolExecutor
- Configuration push with commit confirmed safety (parallel execution supported)
- INI-based host and package management
## Table of Contents
- [Installation](#installation)
- [Configuration File (config.ini)](#configuration-file-configini)
- [Usage](#usage)
- [Workflow](#workflow)
- [Examples](#examples)
- [Supported Models](#supported-models)
- [License](#license)
## Installation
```bash
pip install junos-ops
```
To upgrade to the latest version:
```bash
pip install junos-ops --upgrade
```
### Development Setup
```bash
git clone https://github.com/shigechika/junos-ops.git
cd junos-ops
python3 -m venv .venv
. .venv/bin/activate
pip install -e ".[test]"
```
### Dependencies
- [junos-eznc (PyEZ)](https://www.juniper.net/documentation/product/us/en/junos-pyez) — Juniper NETCONF automation library
- [looseversion](https://pypi.org/project/looseversion/) — Version comparison
### Tab Completion (optional)
```bash
pip install junos-ops[completion]
eval "$(register-python-argcomplete junos-ops)"
```
Add the `eval` line to your shell profile (`~/.bashrc` or `~/.zshrc`) to enable it permanently.
### Installing pip3 (if not available)
<details>
<summary>OS-specific instructions</summary>
- **Ubuntu/Debian**
```bash
sudo apt install python3-pip
```
- **CentOS/RedHat**
```bash
sudo dnf install python3-pip
```
- **macOS**
```bash
brew install python3
```
</details>
## Configuration File (config.ini)
An INI-format configuration file that defines connection settings and model-to-package mappings.
The configuration file is searched in the following order (`-c` / `--config` can override):
1. `./config.ini` in the current directory
2. `~/.config/junos-ops/config.ini` (XDG_CONFIG_HOME)
### Logging Configuration (logging.ini)
An optional `logging.ini` file can be used to customize log output (e.g., suppress verbose paramiko/ncclient messages). The file is searched in the same order as `config.ini`:
1. `./logging.ini` in the current directory
2. `~/.config/junos-ops/logging.ini` (XDG_CONFIG_HOME)
If neither is found, the default logging configuration (INFO level to stdout) is used.
### DEFAULT Section
Defines global connection settings and model-to-package mappings shared by all hosts.
```ini
[DEFAULT]
id = exadmin # SSH username
pw = password # SSH password
sshkey = id_ed25519 # SSH private key file
port = 830 # NETCONF port
hashalgo = md5 # Checksum algorithm
rpath = /var/tmp # Remote path
# huge_tree = true # Allow large XML responses
# RSI_DIR = ./rsi/ # Output directory for RSI/SCF files
# DISPLAY_STYLE = display set # SCF output style (default: display set)
# DISPLAY_STYLE = # Empty for stanza format (show configuration only)
# model.file = package filename
# model.hash = checksum value
EX2300-24T.file = junos-arm-32-18.4R3-S10.tgz
EX2300-24T.hash = e233b31a0b9233bc4c56e89954839a8a
```
The model name must match the `model` field automatically retrieved from the device.
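The lookup works because `configparser` lets every host section inherit the `model.file` / `model.hash` keys from `[DEFAULT]`. A stdlib-only sketch using the example values above (the lookup helper is illustrative, not junos-ops internals):

```python
import configparser

# Trimmed-down config.ini matching the example above
ini = """
[DEFAULT]
EX2300-24T.file = junos-arm-32-18.4R3-S10.tgz
EX2300-24T.hash = e233b31a0b9233bc4c56e89954839a8a

[sw1.example.jp]
"""
cfg = configparser.ConfigParser()
cfg.read_string(ini)

model = "EX2300-24T"  # at runtime this comes from the device facts
section = cfg["sw1.example.jp"]  # host section inherits DEFAULT keys
package = section[f"{model}.file"]   # junos-arm-32-18.4R3-S10.tgz
checksum = section[f"{model}.hash"]  # e233b31a0b9233bc4c56e89954839a8a
```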
### Host Sections
Each section name becomes the hostname. DEFAULT values can be overridden per host.
```ini
[rt1.example.jp] # Section name is used as the connection hostname
[rt2.example.jp]
host = 192.0.2.1 # Override connection target with IP address
[sw1.example.jp]
id = sw1 # Override SSH username
sshkey = sw1_rsa # Override SSH key
[sw2.example.jp]
port = 10830 # Override port
[sw3.example.jp]
EX4300-32F.file = jinstall-ex-4300-20.4R3.8-signed.tgz # Different version for this host
EX4300-32F.hash = 353a0dbd8ff6a088a593ec246f8de4f4
```
## Usage
```
junos-ops <subcommand> [options] [hostname ...]
```
### Subcommands
| Subcommand | Description |
|------------|-------------|
| `upgrade` | Copy and install package |
| `copy` | Copy package from local to remote |
| `install` | Install a previously copied package |
| `rollback` | Rollback to the previous version |
| `version` | Show running/planning/pending versions and reboot schedule |
| `reboot --at YYMMDDHHMM` | Schedule a reboot at the specified time |
| `ls [-l]` | List files on the remote path |
| `config -f FILE [--confirm N] [--health-check CMD \| --no-health-check]` | Push a set command file to devices |
| `rsi` | Collect RSI/SCF in parallel |
| (none) | Show device facts |
### Common Options
| Option | Description |
|--------|-------------|
| `hostname` | Target hostname(s) (defaults to all hosts in config file) |
| `-c`, `--config CONFIG` | Config file path (default: `config.ini` or `~/.config/junos-ops/config.ini`) |
| `-n`, `--dry-run` | Test run (connect and display messages only, no execution) |
| `-d`, `--debug` | Debug output |
| `--force` | Force execution regardless of conditions |
| `--workers N` | Parallel workers (default: 1 for upgrade, 20 for rsi) |
| `--version` | Show program version |
## Workflow
### CLI Architecture Overview
All subcommands share the same execution pipeline: read the config file, determine target hosts, then dispatch each host to a worker thread via `ThreadPoolExecutor`. The `--workers N` option controls parallelism — defaulting to 1 for upgrade operations (safe sequential execution) and 20 for RSI collection (I/O-bound, benefits from concurrency). Each worker establishes its own NETCONF session, so hosts are processed independently with no shared state.
```mermaid
flowchart TD
A[junos-ops CLI] --> B[Read config.ini]
B --> C[Determine target hosts]
C --> D{Subcommand}
D --> E[upgrade / copy / install]
D --> F[version / rollback / reboot]
D --> G[config / show / ls]
D --> H[rsi]
D --> I["(none) → facts"]
E & F & G & H & I --> J["ThreadPoolExecutor<br/>--workers N"]
J --> K["NETCONF / SCP<br/>per host"]
K --> L[Results]
```
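The dispatch pattern described above boils down to the standard `concurrent.futures` idiom. A stdlib-only sketch, where `process_host` is a placeholder for the real per-host NETCONF work:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_host(host: str) -> str:
    # Placeholder for "open NETCONF session, run subcommand, return result".
    # Each call owns its own session, so hosts share no state.
    return f"{host}: ok"

hosts = ["rt1.example.jp", "rt2.example.jp", "sw1.example.jp"]
workers = 1  # --workers N; 1 for upgrade (sequential), 20 for rsi

with ThreadPoolExecutor(max_workers=workers) as pool:
    futures = {pool.submit(process_host, h): h for h in hosts}
    results = [f.result() for f in as_completed(futures)]
```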
### JUNOS Upgrade Workflow
A firmware upgrade follows a four-step sequence designed to minimize risk. First, `dry-run` verifies connectivity, package availability, and checksum without making changes. Then `upgrade` copies and installs the package. `version` confirms the pending version matches expectations before scheduling the reboot. The reboot is scheduled separately so you can choose a maintenance window. If anything goes wrong, `rollback` reverts to the previous firmware at any point before reboot.
```mermaid
flowchart TD
A["1. dry-run<br/>junos-ops upgrade -n"] --> B["2. upgrade<br/>junos-ops upgrade"]
B --> C["3. version<br/>junos-ops version"]
C --> D["4. reboot<br/>junos-ops reboot --at"]
D -.->|"if problems"| E["rollback<br/>junos-ops rollback"]
```
```
1. Pre-flight check with dry-run
junos-ops upgrade -n hostname
2. Copy and install with upgrade
junos-ops upgrade hostname
3. Verify version
junos-ops version hostname
4. Schedule reboot
junos-ops reboot --at 2506130500 hostname
```
Use `rollback` to revert to the previous version if problems occur.
### Upgrade Internal Flow
The `upgrade` subcommand runs multiple safety checks before and during the update process. It first compares the running version against the target — skipping entirely if already up to date. If a different pending version exists, it rolls that back before proceeding. The copy phase frees disk space (storage cleanup + snapshot delete on EX/QFX), then transfers the package via `safe_copy` with checksum verification to detect corruption. Before installing, it clears any existing reboot schedule and saves the rescue config as a recovery baseline. Finally, `sw.install()` validates the package integrity on the device before applying it.
```mermaid
flowchart TD
A[NETCONF connect] --> B{"Running version<br/>= target?"}
B -->|yes| C([Skip — already up to date])
B -->|no| D{"Pending version<br/>exists?"}
D -->|no| E[copy]
D -->|yes| F{Pending ≥ target?}
F -->|yes, no --force| C
F -->|no / --force| G[Rollback pending version]
G --> E
subgraph copy ["copy()"]
E --> H[Storage cleanup]
H --> I["Snapshot delete<br/>(EX/QFX only)"]
I --> J["safe_copy via SCP<br/>+ checksum verification"]:::safe
end
J --> K[Clear reboot schedule]
K --> L[Save rescue config]:::safe
L --> M["sw.install()<br/>validate + checksum"]:::install
M --> N([Done — reboot when ready])
classDef safe fill:#d4edda,stroke:#28a745,color:#000
classDef install fill:#cce5ff,stroke:#007bff,color:#000
```
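The checksum verification in the copy phase amounts to hashing the package incrementally and comparing the result with what the device reports for the copied file. A minimal sketch of the hashing side, assuming incremental MD5 — this is illustrative only, not junos-ops' actual `safe_copy` code:

```python
import hashlib

def checksum(chunks, algo: str = "md5") -> str:
    """Compute a checksum incrementally, so large firmware images
    never need to be loaded fully into memory."""
    h = hashlib.new(algo)
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

def file_chunks(path: str, size: int = 1024 * 1024):
    """Yield a file's contents in fixed-size chunks."""
    with open(path, "rb") as f:
        yield from iter(lambda: f.read(size), b"")

# Usage sketch: compare the local hash against the checksum the device
# reports for the copied image (the device-side call is hypothetical):
#   local = checksum(file_chunks("jinstall-ppc-18.4R3-S10-signed.tgz"))
#   assert local == remote_checksum, "corrupted transfer"
```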
### Reboot Safety Flow
Before scheduling a reboot, `reboot` automatically checks whether the configuration was modified after the firmware install. If changes are detected, it re-saves the rescue config and re-installs with validation to ensure the new firmware is compatible with the current config.
```mermaid
flowchart TD
A[NETCONF connect] --> B{"Existing reboot<br/>schedule?"}
B -->|no| D
B -->|yes| C{--force?}
C -->|no| B2([Skip — keep existing schedule])
C -->|yes| CL[Clear existing schedule] --> D
D{"Pending version<br/>exists?"} -->|no| SCH
D -->|yes| E[Get last commit time]
E --> F[Get rescue config time]
F --> G{"Config modified<br/>after install?"}
G -->|no| SCH
G -->|yes| H[Re-save rescue config]:::warned
H --> I["Re-install firmware<br/>(validate + checksum)"]:::install
I -->|success| SCH
I -->|failure| ERR([Abort — do not reboot]):::errstyle
SCH["Schedule reboot at<br/>--at YYMMDDHHMM"]:::safe
classDef safe fill:#d4edda,stroke:#28a745,color:#000
classDef install fill:#cce5ff,stroke:#007bff,color:#000
classDef warned fill:#fff3cd,stroke:#ffc107,color:#000
classDef errstyle fill:#f8d7da,stroke:#dc3545,color:#000
```
### Config Push Workflow
The `config` subcommand uses a three-phase commit flow: `commit confirmed` (auto-rollback timer) → **health check** → `commit` (permanent). If the health check fails, the final `commit` is withheld and JUNOS automatically rolls back the change when the timer expires — no manual intervention required.
By default, `ping count 3 8.8.8.8 rapid` is executed as the health check. Use `--health-check` to specify a custom command, or `--no-health-check` to skip the check entirely.
| Option | Description |
|--------|-------------|
| `--health-check CMD` | Custom health check command (default: `"ping count 3 8.8.8.8 rapid"`) |
| `--no-health-check` | Skip health check after commit confirmed |
| `--confirm N` | Commit confirmed timeout in minutes (default: 1) |
```mermaid
flowchart TD
A[lock config] --> B[load set commands]
B --> C{diff}
C -->|no changes| D[unlock]
C -->|changes found| E{dry-run?}
E -->|yes| F["print diff<br/>rollback"] --> D
E -->|no| G[commit check]
G --> H["commit confirmed N<br/>(auto-rollback timer)"]:::warned
H --> HC{"health check<br/>(default: ping 8.8.8.8)"}
HC -->|pass| I["commit<br/>changes permanent"]:::safe
HC -->|fail| AR["withhold commit<br/>→ auto-rollback<br/>in N minutes"]:::errstyle
AR --> D
I --> D
G -->|error| J[rollback + unlock]:::errstyle
H -->|error| J
classDef warned fill:#fff3cd,stroke:#ffc107,color:#000
classDef safe fill:#d4edda,stroke:#28a745,color:#000
classDef errstyle fill:#f8d7da,stroke:#dc3545,color:#000
```
The health check determines success as follows:
- **ping commands** (`ping ...`): parse the output for `N packets received` — success if N > 0
- **Other commands** (`show ...`, etc.): success if the command executes without exception
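The ping rule above can be sketched as a small output parser — illustrative of the documented behavior, not necessarily the actual junos-ops parser:

```python
import re

def ping_succeeded(output: str) -> bool:
    """Success if the ping output reports at least one packet received."""
    m = re.search(r"(\d+) packets received", output)
    return bool(m) and int(m.group(1)) > 0

print(ping_succeeded("3 packets transmitted, 3 packets received, 0% packet loss"))   # True
print(ping_succeeded("3 packets transmitted, 0 packets received, 100% packet loss")) # False
```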
```
# 1. Preview changes with dry-run
junos-ops config -f commands.set -n hostname

# 2. Apply changes (with default ping health check)
junos-ops config -f commands.set hostname

# 3. Apply with custom health check
junos-ops config -f commands.set --health-check "ping count 5 10.0.0.1 rapid" hostname

# 4. Apply without health check
junos-ops config -f commands.set --no-health-check hostname
```
## Examples
### upgrade (package update)
```
% junos-ops upgrade rt1.example.jp
# rt1.example.jp
remote: jinstall-ppc-18.4R3-S10-signed.tgz is not found.
copy: system storage cleanup successful
rt1.example.jp: cleaning filesystem ...
rt1.example.jp: b'jinstall-ppc-18.4R3-S10-signed.tgz': 380102074 / 380102074 (100%)
rt1.example.jp: checksum check passed.
install: clear reboot schedule successful
install: rescue config save successful
rt1.example.jp: software validate package-result: 0
```
### version (version check)
```
% junos-ops version rt1.example.jp
# rt1.example.jp
- hostname: rt1
- model: MX5-T
- running version: 18.4R3-S7.2
- planning version: 18.4R3-S10
- running='18.4R3-S7.2' < planning='18.4R3-S10'
- pending version: 18.4R3-S10
- running='18.4R3-S7.2' < pending='18.4R3-S10' : Please plan to reboot.
- reboot requested by exadmin at Sat Dec 4 05:00:00 2021
```
### rsi (parallel RSI/SCF collection)
```
% junos-ops rsi --workers 5 rt1.example.jp rt2.example.jp
# rt1.example.jp
rt1.example.jp.SCF done
rt1.example.jp.RSI done
# rt2.example.jp
rt2.example.jp.SCF done
rt2.example.jp.RSI done
```
### reboot (scheduled reboot)
```
% junos-ops reboot --at 2506130500 rt1.example.jp
# rt1.example.jp
Shutdown at Fri Jun 13 05:00:00 2025. [pid 97978]
```
### config (push set command file)
Push a set-format command file to multiple devices. Uses a safe commit flow: commit check, commit confirmed, then confirm.
```
% cat add-user.set
set system login user viewer class read-only
set system login user viewer authentication ssh-ed25519 "ssh-ed25519 AAAA..."
% junos-ops config -f add-user.set -n rt1.example.jp rt2.example.jp
# rt1.example.jp
[edit system login]
+ user viewer {
+ class read-only;
+ authentication {
+ ssh-ed25519 "ssh-ed25519 AAAA...";
+ }
+ }
dry-run: rollback (no commit)
# rt2.example.jp
...
% junos-ops config -f add-user.set rt1.example.jp rt2.example.jp
# rt1.example.jp
...
commit check passed
commit confirmed 1 applied
health check: ping count 3 8.8.8.8 rapid
health check passed (3 packets received)
commit confirmed, changes are now permanent
# rt2.example.jp
...
```
Use `--confirm N` to change the commit confirmed timeout (default: 1 minute). Use `--no-health-check` to skip the post-commit health check.
Set files can include `#` comments and blank lines — they are automatically stripped before sending to the device.
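The stripping step can be sketched as follows — a minimal sketch of the documented behavior, not the actual implementation:

```python
def strip_set_file(lines):
    """Drop '#' comment lines and blank lines from a set-format file,
    keeping only the commands to send to the device."""
    commands = []
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#"):
            commands.append(line)
    return commands

print(strip_set_file([
    "# add read-only user",
    "",
    "set system login user viewer class read-only",
]))
# ['set system login user viewer class read-only']
```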
### show (run CLI command)
Run an arbitrary CLI command across multiple devices in parallel.
```
% junos-ops show "show bgp summary" -c accounts.ini gw1.example.jp gw2.example.jp
# gw1.example.jp
Groups: 4 Peers: 6 Down peers: 0
...
# gw2.example.jp
Groups: 3 Peers: 4 Down peers: 0
...
```
Use `-f` to run multiple commands from a file within a single NETCONF session per device:
```
% cat commands.txt
# security policy check
show security policies hit-count
show security flow session summary
% junos-ops show -f commands.txt -c accounts.ini fw1.example.jp
# fw1.example.jp
## show security policies hit-count
...
## show security flow session summary
...
```
> **Note:** JUNOS CLI pipe filters (`| match`, `| count`, etc.) are not supported. PyEZ's `dev.cli()` sends commands via NETCONF RPC, which does not process pipe modifiers. Filter output with shell tools (e.g. `grep`) instead.
### No subcommand (show device facts)
```
% junos-ops gw1.example.jp
# gw1.example.jp
{'2RE': True,
'hostname': 'gw1',
'model': 'MX240',
'version': '18.4R3-S7.2',
...}
```
## Supported Models
Any Juniper model can be supported by defining the model name and package file in the configuration file. Models included in the example configuration:
| Series | Example Models |
|--------|---------------|
| EX | EX2300-24T, EX3400-24T, EX4300-32F |
| MX | MX5-T, MX240 |
| QFX | QFX5110-48S-4C |
| SRX | SRX300, SRX345, SRX1500, SRX4600 |
## License
[Apache License 2.0](LICENSE)
Copyright 2022-2025 AIKAWA Shigechika
| text/markdown | AIKAWA Shigechika | null | null | null | null | juniper, junos, netconf, pyez, network, automation | [
"Development Status :: 4 - Beta",
"Intended Audience :: System Administrators",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: System :: Networking",
"Topic :: System :: Systems ... | [] | null | null | >=3.12 | [] | [] | [] | [
"junos-eznc",
"looseversion",
"pytest; extra == \"test\"",
"pytest-cov; extra == \"test\"",
"argcomplete; extra == \"completion\""
] | [] | [] | [] | [
"Homepage, https://github.com/shigechika/junos-ops",
"Repository, https://github.com/shigechika/junos-ops",
"Issues, https://github.com/shigechika/junos-ops/issues",
"Changelog, https://github.com/shigechika/junos-ops/blob/main/CHANGELOG.md"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:42:57.811256 | junos_ops-0.8.0.tar.gz | 42,752 | e7/52/8afa65c6c1bbac03cdafdcf3d120bab00c4742427d1103fad35bafbcbd69/junos_ops-0.8.0.tar.gz | source | sdist | null | false | 151d7834d3072f28848998cef0bb4775 | f77d33e751c96cb33f66b43e3989dd4d9dcc93a0c223c8b5a06c37091a530f99 | e7528afa65c6c1bbac03cdafdcf3d120bab00c4742427d1103fad35bafbcbd69 | Apache-2.0 | [
"LICENSE"
] | 217 |
2.4 | opensymbolicai-core | 0.4.2 | OpenSymbolicAI Core | # OpenSymbolicAI Core (Python)
<p align="center">
<img src="assets/demo.gif" alt="OpenSymbolicAI Demo" width="800">
</p>
**Make AI a software engineering discipline.**
## Why This Architecture?
**LLMs are untrusted.** They're stochastic, may be trained on poisoned data, and change under the hood without notice. The more tokens they produce, the further they drift. More instructions often make things *worse*.
**Current orchestration is risky.** Most agent frameworks dump instructions and data together in the context window, then let the LLM loop freely:
```
Instructions + Data + Tools → LLM → Tool call → Output → LLM → Tool call → ...
```
This creates injection risks: data can masquerade as instructions, much as user input masquerades as SQL in injection attacks. And since LLMs are autoregressive, the more context you add, the less reliable they become.
**OpenSymbolicAI separates concerns:**
| Problem | How We Solve It |
|---------|-----------------|
| Data influences planning unpredictably | **Planning is isolated.** LLM sees only the query and primitive signatures—not your data |
| LLM can make unplanned tool calls | **Execution is deterministic.** LLM is a leaf node—it plans, then execution happens without LLM in the loop |
| Prompt injection and data exfiltration | **Symbolic Firewall.** LLM operates on variable names, not raw content. Data stays in application memory, never tokenized. [Learn more](https://www.opensymbolic.ai/blog/security-by-design) |
| Side effects are hidden | **Mutations are explicit.** `read_only=False` primitives trigger approval hooks before execution |
| Outputs are unpredictable JSON/markdown | **Outputs are typed.** Pydantic models guarantee structured, validated results |
| Long contexts cause drift | **Context is minimal.** Only what's needed goes to the LLM—faster, cheaper, more reliable |
| Model changes break prompts | **Model-agnostic.** Constrained inputs/outputs minimize variability across models |
| Failures lose progress | **Checkpoint system.** Pause/resume execution across distributed workers with full state serialization |
| Hard to debug what happened | **Full tracing.** Before/after namespace snapshots, argument expressions, resolved values, timing—every step recorded |
> **Thesis:** Stop *prompting*. Start *programming*.
---
## What This Repo Is
`core-py` is the **Python runtime for OpenSymbolicAI**: the core primitives and execution model for building LLM-powered systems as *software*, not as a pile of strings.
**Core concepts:**
- **Primitives** (`@primitive`) - Atomic operations your agent can execute
- **Decompositions** (`@decomposition`) - Examples showing how to break complex intents into primitive sequences
- **Evaluators** (`@evaluator`) - Goal evaluation methods for iterative agents
**Blueprints** (pick the one that fits your problem):
| Blueprint | When to Use |
|-----------|-------------|
| **PlanExecute** | Single-turn tasks with a fixed sequence of primitives |
| **DesignExecute** | Tasks needing loops and conditionals (dynamic-length data) |
| **GoalSeeking** | Iterative problems where progress is evaluated each step |
**Related:** [opensymbolicai-cli](https://github.com/OpenSymbolicAI/cli-py) — Interactive TUI for discovering and running agents
---
## Why "Prompt → Code" Matters
| Prompts as strings | Prompts as code |
|-------------------|-----------------|
| Hard to reproduce | **Version** behavior, not just text |
| Hard to review | **Diff** and code review changes |
| Brittle or no tests | **Test** expectations (unit + integration) |
| "Model mood" mysteries | **Debug** with execution traces |
| Copy-paste reuse | **Compose** as reusable modules |
---
## Quickstart
### 1. Install
```bash
pip install opensymbolicai-core # from PyPI
# or for development:
uv sync
```
### 2. Configure environment
```bash
cp .env.example .env
# Add your API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)
```
### 3. Run an example
```bash
cd examples/calculator
uv run python run_calculator.py # uses gpt-oss:20b by default
uv run python run_calculator.py qwen3:1.7b # specify a model
uv run python run_calculator.py qwen3:1.7b -v # verbose mode (shows plans)
```
---
## Example: Scientific Calculator Agent
```python
import math

from opensymbolicai import PlanExecute, primitive, decomposition

class ScientificCalculator(PlanExecute):
    @primitive(read_only=True)
    def add_numbers(self, a: float, b: float) -> float:
        """Add two numbers together."""
        return a + b

    @primitive(read_only=True)
    def convert_degrees_to_radians(self, angle: float) -> float:
        """Convert degrees to radians."""
        return angle * 3.14159 / 180

    @primitive(read_only=True)
    def sine(self, angle_in_radians: float) -> float:
        """Calculate the sine of an angle given in radians."""
        return math.sin(angle_in_radians)

    @decomposition(
        intent="What is sine of 90 degrees?",
        expanded_intent="Convert to radians, then calculate sine",
    )
    def _example_sine(self) -> float:
        rad = self.convert_degrees_to_radians(angle=90)
        return self.sine(angle_in_radians=rad)
```
The LLM learns from decomposition examples to plan new queries using your primitives.
---
## Example: Shopping Cart Agent (DesignExecute)
When tasks involve dynamic-length data, you need loops and conditionals. `DesignExecute` extends `PlanExecute` with control flow support and loop guards to prevent runaway execution.
```python
from opensymbolicai import DesignExecute, primitive, decomposition

class ShoppingCart(DesignExecute):
    @primitive(read_only=True)
    def lookup_price(self, item: str) -> float:
        """Look up the unit price of an item from the catalog."""
        return CATALOG[item.lower()]

    @primitive(read_only=True)
    def apply_discount(self, price: float, percent: float) -> float:
        """Apply a percentage discount to a price."""
        return round(price * (1 - percent / 100), 2)

    # (multiply, add, lookup_tax_rate, and add_tax primitives omitted for brevity)

    @decomposition(
        intent="I need 5 apples and 1 laptop shipped to California",
        expanded_intent="Loop over items, apply bulk discounts, add state tax",
    )
    def _example_cart(self) -> float:
        cart = [("apples", 5), ("laptop", 1)]
        subtotal = 0.0
        for raw_name, qty in cart:
            price = self.lookup_price(item=raw_name)
            line = self.multiply(price=price, quantity=qty)
            if qty >= 3:
                line = self.apply_discount(price=line, percent=10.0)
            subtotal = self.add(a=subtotal, b=line)
        tax_rate = self.lookup_tax_rate(state="CA")
        return self.add_tax(subtotal=subtotal, rate=tax_rate)
```
The LLM generates plans with `for` loops and `if` statements. Loop guards automatically prevent infinite loops.
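A loop guard can be as simple as a bounded iterator. The following is a minimal sketch of the idea — `DesignExecute`'s actual guard mechanism may differ:

```python
class LoopGuardExceeded(RuntimeError):
    """Raised when a plan's loop runs longer than the configured bound."""

def guarded(iterable, max_iterations: int = 1000):
    """Yield items from iterable, aborting once max_iterations is reached,
    so a mis-planned loop cannot run away."""
    for i, item in enumerate(iterable):
        if i >= max_iterations:
            raise LoopGuardExceeded(f"loop exceeded {max_iterations} iterations")
        yield item

print(list(guarded(range(3), max_iterations=10)))  # [0, 1, 2]
```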
---
## Example: Function Optimizer (GoalSeeking)
For iterative problems where you can't solve it in one shot, `GoalSeeking` runs a plan-execute-evaluate loop until the goal is achieved.
```python
from opensymbolicai import GoalSeeking, primitive, evaluator, decomposition
from opensymbolicai import GoalContext, GoalEvaluation

class FunctionOptimizer(GoalSeeking):
    @primitive(read_only=True)
    def evaluate(self, x: float) -> float:
        """Evaluate the mystery function at point x."""
        return round(target_function(x), 6)  # target_function defined elsewhere

    @evaluator
    def check_converged(self, goal: str, context: GoalContext) -> GoalEvaluation:
        """Goal is achieved when we find a value close to the true maximum."""
        return GoalEvaluation(goal_achieved=context.converged)

    @decomposition(
        intent="Explore the function across the range",
        expanded_intent="Sample spread-out points to understand the function shape",
    )
    def _example_explore(self) -> float:
        v1 = self.evaluate(x=3.0)
        v2 = self.evaluate(x=8.0)
        v3 = self.evaluate(x=14.0)
        return v3
```
Each iteration: **plan** (pick sample points) → **execute** (call primitives) → **introspect** (extract knowledge into context) → **evaluate** (check goal). The LLM never sees raw execution results—only structured `GoalContext`.
---
## Auto-Documented Type Definitions
When primitives use Pydantic models as parameters or return types, the LLM prompt automatically includes a **Type Definitions** section listing each model's fields and types. This eliminates guesswork — the LLM knows exactly which attributes to use.
```python
from pydantic import BaseModel
from opensymbolicai import DesignExecute, primitive

class Flight(BaseModel):
    flight_number: str
    price: float
    origin: str
    destination: str

class TravelAgent(DesignExecute):
    @primitive(read_only=True)
    def search_flights(self, origin: str, destination: str) -> list[Flight]:
        """Search for available flights."""
        ...
```
The generated prompt will include:
```
## Type Definitions
Flight(flight_number: str, price: float, origin: str, destination: str)
```
This works across all blueprints (PlanExecute, DesignExecute, GoalSeeking) and handles generic types — `list[Flight]`, `Flight | None`, `Optional[Flight]`, `Union[Flight, Hotel]` are all unwrapped to discover the underlying models. Models are deduplicated and sorted alphabetically.
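The unwrapping can be sketched with `typing.get_origin`/`get_args`. This is illustrative, not the library's actual code, and `Flight`/`Hotel` below are plain stand-ins for Pydantic models:

```python
from typing import Optional, Union, get_args, get_origin

def collect_models(annotation, is_model):
    """Recursively unwrap generic annotations (list[X], Optional[X],
    Union[X, Y], X | None) and collect the classes is_model accepts."""
    found = set()
    if get_origin(annotation) is not None:   # a generic: descend into its args
        for arg in get_args(annotation):
            found |= collect_models(arg, is_model)
    elif isinstance(annotation, type) and is_model(annotation):
        found.add(annotation)
    return found

class Flight: ...   # stand-ins for pydantic.BaseModel subclasses
class Hotel: ...

def is_model(cls):
    return cls in (Flight, Hotel)

names = sorted(m.__name__ for m in collect_models(Union[list[Flight], Hotel, None], is_model))
print(names)  # ['Flight', 'Hotel']
```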
---
## Structured Exceptions
Primitives can raise typed exceptions that are captured in the execution trace:
```python
from opensymbolicai import ValidationError, PreconditionError, RetryableError

@primitive(read_only=True)
def divide(self, a: float, b: float) -> float:
    if b == 0:
        raise PreconditionError("Cannot divide by zero", code="DIVISION_BY_ZERO")
    return a / b
```
| Exception | Use Case |
|-----------|----------|
| `ValidationError` | Invalid inputs, out-of-range values |
| `PreconditionError` | Missing prerequisites (division by zero, empty collection) |
| `ResourceError` | Unavailable external resources (DB, API, file) |
| `OperationError` | Runtime failures during execution |
| `RetryableError` | Transient errors (rate limits, timeouts) — does not halt execution |
All exceptions serialize to dict for trace persistence and carry optional `code` and `details` fields.
---
## Supported Providers
Ollama, OpenAI, Anthropic, Fireworks, Groq, or add your own.
---
## Benchmarks
Run the calculator benchmark to evaluate model performance:
```bash
uv run python benchmarks/calculator/benchmark.py # all models
uv run python benchmarks/calculator/benchmark.py --models qwen3:1.7b # specific model
uv run python benchmarks/calculator/benchmark.py --limit 20 -v # quick test, verbose
```
See [benchmarks/calculator/README.md](benchmarks/calculator/README.md) for full options (parallel execution, categories, JSON export).
### Model Recommendations (Ollama)
| Model | Accuracy | Notes |
|-------|----------|-------|
| `gpt-oss:20b` | 100% | Best accuracy, larger model |
| `qwen3:1.7b` | 100% | Best balance of accuracy & size |
| `qwen3:8b` | 100% | Perfect accuracy |
| `gemma3:4b` | 94% | Tested on 120 intents |
| `phi4:14b` | 80% | Strong, larger model |
**Recommendations:**
- **Primary choice:** `qwen3:1.7b` - fast, accurate, small footprint
- **Broader validation:** `gemma3:4b` - 94% accuracy, but proven on the larger 120-intent test set
- **Best accuracy:** `gpt-oss:20b` or `qwen3:8b` - 100% on all tests
---
## Development
### Pre-commit hooks
```bash
uv run pre-commit install # one-time
uv run pre-commit run --all-files # run manually
```
### Commands
```bash
uv run ruff check . # lint
uv run ruff check --fix . # lint + autofix
uv run mypy src # type-check
uv run pytest # run tests
```
---
## Repository Structure
```
src/opensymbolicai/
├── core.py # @primitive, @decomposition, @evaluator decorators
├── models.py # Pydantic models (configs, traces, results)
├── llm.py # Multi-provider LLM abstraction
├── checkpoint.py # Distributed execution & state serialization
├── exceptions.py # Structured exception hierarchy
└── blueprints/
├── plan_execute.py # PlanExecute — single-turn plan & execute
├── design_execute.py # DesignExecute — adds loops & conditionals
└── goal_seeking.py # GoalSeeking — iterative plan-execute-evaluate
examples/
├── calculator/ # Scientific calculator (PlanExecute)
├── shopping_cart/ # Shopping cart with tax (DesignExecute)
└── function_optimizer/ # Black-box optimization (GoalSeeking)
tests/ # Unit tests
integration_tests/ # Integration tests (requires LLM)
benchmarks/ # Performance benchmarks
docs/ # MkDocs documentation
```
---
## Contributing
PRs welcome. Please include:
- Unit test in `tests/`
- Integration test in `integration_tests/` (when relevant)
- Benchmark if it impacts runtime-critical paths
---
## License
MIT
| text/markdown | OpenSymbolicAI | null | null | null | null | opensymbolicai, symbolic-ai, runtime | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pydantic>=2.0",
"tqdm>=4.67.1"
] | [] | [] | [] | [
"Homepage, https://github.com/OpenSymbolicAI/core-py",
"Repository, https://github.com/OpenSymbolicAI/core-py"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T05:42:53.607708 | opensymbolicai_core-0.4.2.tar.gz | 77,824 | 98/4e/321fcc4956ee750fe1ba1ff1f15ab5c57f321ce0da9bd81f052a86321c3f/opensymbolicai_core-0.4.2.tar.gz | source | sdist | null | false | 5958df83a167a8915e8354dad3425c10 | 0ff8897644109cdad6cd3d890ef6c24dac0f11381bab11334c4316eee1ddfc6c | 984e321fcc4956ee750fe1ba1ff1f15ab5c57f321ce0da9bd81f052a86321c3f | MIT | [
"LICENSE"
] | 218 |
2.4 | primordial-agentstore | 0.3.0 | Primordial AgentStore CLI — discover, install, and run AI agents | # Primordial
CLI for the Primordial AgentStore — discover, install, and run AI agents.
## Install
```bash
pip install primordial-agentstore
```
## Usage
```bash
primordial --help
```
| text/markdown | Andy Browning | null | null | null | null | agents, ai, cli, llm | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Librari... | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.1",
"cryptography>=42.0",
"e2b>=1.0",
"httpx>=0.27",
"platformdirs>=4.0",
"pydantic>=2.0",
"pyyaml>=6.0",
"rich>=13.0",
"pytest-asyncio>=0.23; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.5; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/andybrowning/primordial",
"Repository, https://github.com/andybrowning/primordial"
] | twine/6.2.0 CPython/3.11.9 | 2026-02-21T05:42:37.649889 | primordial_agentstore-0.3.0.tar.gz | 30,123 | 1d/0d/1f29a0364c9befbd1700a6bb253c24df6213ef9c65ef0f92429ed0872ac2/primordial_agentstore-0.3.0.tar.gz | source | sdist | null | false | 86d58f9ae202b9bc0d185934b7cddc2a | a8a1bed842eac3dc55b587b8a67b0e3330e86e8e48cdfae12047591e9cdba78f | 1d0d1f29a0364c9befbd1700a6bb253c24df6213ef9c65ef0f92429ed0872ac2 | MIT | [] | 208 |
2.4 | mcp-memory-service | 10.17.2 | Open-source persistent memory for AI agent pipelines and Claude. REST API + semantic search + knowledge graph + autonomous consolidation. Self-host, zero cloud cost. | # mcp-memory-service
## Persistent Shared Memory for AI Agent Pipelines
Open-source memory backend for multi-agent systems.
Agents store decisions, share causal knowledge graphs, and retrieve
context in 5ms — without cloud lock-in or API costs.
**Works with LangGraph · CrewAI · AutoGen · any HTTP client · Claude Desktop**
---
[](https://opensource.org/licenses/Apache-2.0)
[](https://pypi.org/project/mcp-memory-service/)
[](https://pypi.org/project/mcp-memory-service/)
[](https://github.com/doobidoo/mcp-memory-service/stargazers)
[](https://github.com/langchain-ai/langgraph)
[](https://crewai.com)
[](https://github.com/microsoft/autogen)
[](https://claude.ai)
[](https://cursor.sh)
---
## Why Agents Need This
| Without mcp-memory-service | With mcp-memory-service |
|---|---|
| Each agent run starts from zero | Agents retrieve prior decisions in 5ms |
| Memory is local to one graph/run | Memory is shared across all agents and runs |
| You manage Redis + Pinecone + glue code | One self-hosted service, zero cloud cost |
| No causal relationships between facts | Knowledge graph with typed edges (causes, fixes, contradicts) |
| Context window limits create amnesia | Autonomous consolidation compresses old memories |
**Key capabilities for agent pipelines:**
- **Framework-agnostic REST API** — 15 endpoints, no MCP client library needed
- **Knowledge graph** — agents share causal chains, not just facts
- **`X-Agent-ID` header** — auto-tag memories by agent identity for scoped retrieval
- **`conversation_id`** — bypass deduplication for incremental conversation storage
- **SSE events** — real-time notifications when any agent stores or deletes a memory
- **Embeddings run locally via ONNX** — memory never leaves your infrastructure
## Agent Quick Start
```bash
pip install mcp-memory-service
MCP_ALLOW_ANONYMOUS_ACCESS=true memory server --http
# REST API running at http://localhost:8000
```
```python
import asyncio
import httpx

BASE_URL = "http://localhost:8000"

async def main():
    async with httpx.AsyncClient() as client:
        # Store — auto-tag with X-Agent-ID header
        await client.post(f"{BASE_URL}/api/memories", json={
            "content": "API rate limit is 100 req/min",
            "tags": ["api", "limits"],
        }, headers={"X-Agent-ID": "researcher"})
        # Stored with tags: ["api", "limits", "agent:researcher"]

        # Search — scope to a specific agent
        results = await client.post(f"{BASE_URL}/api/memories/search", json={
            "query": "API rate limits",
            "tags": ["agent:researcher"],
        })
        print(results.json()["memories"])

asyncio.run(main())
```
**Framework-specific guides:** [docs/agents/](docs/agents/)
## Comparison with Alternatives
| | Mem0 | Zep | DIY Redis+Pinecone | **mcp-memory-service** |
|---|---|---|---|---|
| License | Proprietary | Enterprise | — | **Apache 2.0** |
| Cost | Per-call API | Enterprise | Infra costs | **$0** |
| Framework integration | SDK | SDK | Manual | **REST API (any HTTP client)** |
| Knowledge graph | No | Limited | No | **Yes (typed edges)** |
| Auto consolidation | No | No | No | **Yes (decay + compression)** |
| On-premise embeddings | No | No | Manual | **Yes (ONNX, local)** |
| Privacy | Cloud | Cloud | Partial | **100% local** |
| Hybrid search | No | Yes | Manual | **Yes (BM25 + vector)** |
| MCP protocol | No | No | No | **Yes** |
| REST API | Yes | Yes | Manual | **Yes (15 endpoints)** |
---
## Stop Re-Explaining Your Project to AI Every Session
<p align="center">
<img width="240" alt="MCP Memory Service" src="https://github.com/user-attachments/assets/eab1f341-ca54-445c-905e-273cd9e89555" />
</p>
Your AI assistant forgets everything when you start a new chat. After 50 tool uses, context explodes to 500k+ tokens—Claude slows down, you restart, and now it remembers nothing. You spend 10 minutes re-explaining your architecture. **Again.**
**MCP Memory Service solves this.**
It automatically captures your project context, architecture decisions, and code patterns. When you start fresh sessions, your AI already knows everything—no re-explaining, no context loss, no wasted time.
## 🎥 2-Minute Video Demo
<div align="center">
<a href="https://www.youtube.com/watch?v=veJME5qVu-A">
<img src="https://img.youtube.com/vi/veJME5qVu-A/maxresdefault.jpg" alt="MCP Memory Service Demo" width="700">
</a>
<p><em>Technical showcase: Performance, Architecture, AI/ML Intelligence & Developer Experience</em></p>
</div>
### ⚡ Works With Your Favorite AI Tools
#### 🤖 Agent Frameworks (REST API)
**LangGraph** · **CrewAI** · **AutoGen** · **Any HTTP Client** · **OpenClaw/Nanobot** · **Custom Pipelines**
#### 🖥️ CLI & Terminal AI (MCP)
**Claude Code** · **Gemini Code Assist** · **Aider** · **GitHub Copilot CLI** · **Amp** · **Continue** · **Zed** · **Cody**
#### 🎨 Desktop & IDE (MCP)
**Claude Desktop** · **VS Code** · **Cursor** · **Windsurf** · **Raycast** · **JetBrains** · **Sourcegraph** · **Qodo**
#### 💬 Chat Interfaces (MCP)
**ChatGPT** (Developer Mode) · **Claude Web**
**Works seamlessly with any MCP-compatible client or HTTP client** - whether you're building agent pipelines, coding in the terminal, IDE, or browser.
> **💡 NEW**: ChatGPT now supports MCP! Enable Developer Mode to connect your memory service directly. [See setup guide →](https://github.com/doobidoo/mcp-memory-service/discussions/377#discussioncomment-15605174)
---
## 🚀 Get Started in 60 Seconds
**Express Install** (recommended for most users):
```bash
pip install mcp-memory-service
# Auto-configure for Claude Desktop (macOS/Linux)
python -m mcp_memory_service.scripts.installation.install --quick
```
**What just happened?**
- ✅ Installed memory service
- ✅ Configured optimal backend (SQLite)
- ✅ Set up Claude Desktop integration
- ✅ Enabled automatic context capture
**Next:** Restart Claude Desktop. Your AI now remembers everything across sessions.
<details>
<summary><strong>📦 Alternative: PyPI + Manual Configuration</strong></summary>
```bash
pip install mcp-memory-service
```
Then add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS):
```json
{
"mcpServers": {
"memory": {
"command": "memory",
"args": ["server"]
}
}
}
```
</details>
<details>
<summary><strong>🔧 Advanced: Custom Backends & Team Setup</strong></summary>
For production deployments, team collaboration, or cloud sync:
```bash
git clone https://github.com/doobidoo/mcp-memory-service.git
cd mcp-memory-service
python scripts/installation/install.py
```
Choose from:
- **SQLite** (local, fast, single-user)
- **Cloudflare** (cloud, multi-device sync)
- **Hybrid** (best of both: 5ms local + background cloud sync)
</details>
---
## 💡 Why You Need This
### The Problem
| Session 1 | Session 2 (Fresh Start) |
|-----------|-------------------------|
| You: "We're building a Next.js app with Prisma and tRPC" | AI: "What's your tech stack?" ❌ |
| AI: "Got it, I see you're using App Router" | You: *Explains architecture again for 10 minutes* 😤 |
| You: "Add authentication with NextAuth" | AI: "Should I use Pages Router or App Router?" ❌ |
### The Solution
| Session 1 | Session 2 (Fresh Start) |
|-----------|-------------------------|
| You: "We're building a Next.js app with Prisma and tRPC" | AI: "I remember—Next.js App Router with Prisma and tRPC. What should we build?" ✅ |
| AI: "Got it, I see you're using App Router" | You: "Add OAuth login" |
| You: "Add authentication with NextAuth" | AI: "I'll integrate NextAuth with your existing Prisma setup." ✅ |
**Result:** Zero re-explaining. Zero context loss. Just continuous, intelligent collaboration.
---
## 🌐 SHODH Ecosystem Compatibility
MCP Memory Service is **fully compatible** with the [SHODH Unified Memory API Specification v1.0.0](https://github.com/varun29ankuS/shodh-memory/blob/main/specs/openapi.yaml), enabling seamless interoperability across the SHODH ecosystem.
### Compatible Implementations
| Implementation | Backend | Embeddings | Use Case |
|----------------|---------|------------|----------|
| **[shodh-memory](https://github.com/varun29ankuS/shodh-memory)** | RocksDB | MiniLM-L6-v2 (ONNX) | Reference implementation |
| **[shodh-cloudflare](https://github.com/doobidoo/shodh-cloudflare)** | Cloudflare Workers + Vectorize | Workers AI (bge-small) | Edge deployment, multi-device sync |
| **mcp-memory-service** (this) | SQLite-vec / Hybrid | MiniLM-L6-v2 (ONNX) | Desktop AI assistants (MCP) |
### Unified Schema Support
All SHODH implementations share the same memory schema:
- ✅ **Emotional Metadata**: `emotion`, `emotional_valence`, `emotional_arousal`
- ✅ **Episodic Memory**: `episode_id`, `sequence_number`, `preceding_memory_id`
- ✅ **Source Tracking**: `source_type`, `credibility`
- ✅ **Quality Scoring**: `quality_score`, `access_count`, `last_accessed_at`
**Interoperability Example:**
Export memories from mcp-memory-service → Import to shodh-cloudflare → Sync across devices → Full fidelity preservation of emotional_valence, episode_id, and all spec fields.
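The shared schema above can be sketched as a plain record check — a minimal, illustrative Python snippet (field names come from the spec list above; the value types and the `check_spec_fields` helper are assumptions for illustration, not the project's actual API):

```python
# Field names from the SHODH unified schema listed above.
SPEC_FIELDS = {
    "emotion", "emotional_valence", "emotional_arousal",
    "episode_id", "sequence_number", "preceding_memory_id",
    "source_type", "credibility",
    "quality_score", "access_count", "last_accessed_at",
}

def check_spec_fields(memory: dict) -> set:
    """Return the spec fields missing from a memory record (hypothetical helper)."""
    return SPEC_FIELDS - memory.keys()

memory = {
    "content": "We use Next.js App Router with Prisma and tRPC",
    "emotion": "neutral",
    "emotional_valence": 0.1,
    "emotional_arousal": 0.2,
    "episode_id": "ep-001",
    "sequence_number": 3,
    "preceding_memory_id": "mem-002",
    "source_type": "user",
    "credibility": 0.9,
    "quality_score": 0.85,
    "access_count": 4,
    "last_accessed_at": "2026-02-21T05:42:10Z",
}

# An empty result means every spec field is present, so a round-trip
# between implementations can preserve full fidelity.
print(check_spec_fields(memory))
```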
---
## ✨ Quick Start Features
🧠 **Persistent Memory** – Context survives across sessions with semantic search
🔍 **Smart Retrieval** – Finds relevant context automatically using AI embeddings
⚡ **5ms Speed** – Instant context injection, no latency
🔄 **Multi-Client** – Works across 13+ AI applications
☁️ **Cloud Sync** – Optional Cloudflare backend for team collaboration
🔒 **Privacy-First** – Local-first, you control your data
📊 **Web Dashboard** – Visualize and manage memories at `http://localhost:8000`
🧬 **Knowledge Graph** – Interactive D3.js visualization of memory relationships 🆕
### 🖥️ Dashboard Preview (v9.3.0)
<p align="center">
<img src="https://raw.githubusercontent.com/wiki/doobidoo/mcp-memory-service/images/dashboard/mcp-memory-dashboard-v9.3.0-tour.gif" alt="MCP Memory Dashboard Tour" width="800"/>
</p>
**8 Dashboard Tabs:** Dashboard • Search • Browse • Documents • Manage • Analytics • **Quality** (NEW) • API Docs
📖 See [Web Dashboard Guide](https://github.com/doobidoo/mcp-memory-service/wiki/Web-Dashboard-Guide) for complete documentation.
---
## 🆕 Latest Release: **v10.17.2** (February 21, 2026)
**CI Stability Fixes: uv CLI Test Timeout + Root Installer Test Skip**
**What's New:**
- **uv CLI test timeout increased to 120s**: `test_memory_command_exists` and `test_memory_server_command_exists` now use 120-second timeouts (up from 60s) to prevent flaky failures on CI cold-cache runs where `uv run memory --help` must resolve the full dependency graph (#486).
- **CI job timeout doubled to 20 minutes**: The `test-uvx-compatibility` workflow job timeout increased from 10 to 20 minutes to accommodate extended test timeouts on slow CI runners.
- **Root `install.py` tests skip gracefully**: Tests patching `_pip_available` / `_install_python_packages` in the root redirector now skip with a clear message instead of raising `AttributeError` — the root `install.py` is a dispatcher and does not contain those helpers.
**Previous Releases**:
- **v10.17.1** - Hook System Bug Fixes + Root Installer + Session-Start Reliability (session-end SyntaxError on Node.js v24, MCP_HTTP_PORT detection, exponential backoff retry)
- **v10.17.0** - Default "untagged" Tag for All Tagless Memories + Cleanup Script (306 production memories retroactively fixed)
- **v10.16.1** - Windows MCP Initialization Timeout Fix (`MCP_INIT_TIMEOUT` env override, 7 unit tests)
- **v10.16.0** - Agentic AI Market Repositioning with REST API Integration Guides (LangGraph, CrewAI, AutoGen guides, X-Agent-ID header auto-tagging, agent: tag namespace)
- **v10.15.1** - Stale Venv Detection for Moved/Renamed Projects (auto-recreate venv when pip shebang interpreter path is missing)
- **v10.15.0** - Config Validation & Safe Environment Parsing (`validate_config()` at startup, `safe_get_int_env()`, 8 new robustness tests)
- **v10.14.0** - `conversation_id` Support for Incremental Conversation Saves (semantic dedup bypass, metadata storage, all backends)
- **v10.13.2** - Consolidation & Hybrid Storage Bug Fixes (missing StorageProtocol proxy methods, timezone-aware datetime, contributed by @VibeCodeChef)
- **v10.13.1** - Critical Bug Fixes (tag search limits, REST API field access, metadata corruption, hash display, prompt handler crashes)
- **v10.13.0** - Test Suite Stability (100% pass rate, 1,161 passing tests, authentication testing patterns)
- **v10.12.1** - Custom Memory Type Configuration Test Fixes (test isolation, environment cleanup)
- **v10.12.0** - Configurable Memory Type Ontology (75 types supporting PM and knowledge work, custom type configuration)
- **v10.11.2** - Tag Filtering & Security Hardening (DoS protection, SQL-level optimization, comprehensive tests)
- **v10.11.1** - MCP Prompt Handlers Fix (all 5 prompt handlers working, 100% success rate restored)
- **v10.11.0** - SQLite Integrity Monitoring (automatic corruption detection/repair, 3.5ms overhead, emergency export)
- **v10.10.6** - Test Infrastructure Improvements (Python 3.11 compatibility, pytest-benchmark, coverage baseline)
- **v10.10.5** - Embedding Dimension Cache Fix (dimension mismatch prevention, cache consistency)
- **v10.10.4** - CLI Batch Ingestion Fix (async bug causing "NoneType" errors, 100% success rate restored)
- **v10.10.3** - Test Infrastructure & Memory Scoring Fixes (graph validation, test authentication, score capping)
- **v10.10.2** - Memory Injection Filtering (minRelevanceScore enforcement, project-affinity filter, security hardening)
- **v10.10.1** - Search Handler Fix, Import Error Fix, Security Enhancement, Improved Exact Search
- **v10.10.0** - Environment Configuration Viewer (11 categorized parameters, sensitive masking, Settings Panel integration)
- **v10.9.0** - Batched Inference Performance (4-16x GPU speedup, 2.3-2.5x CPU speedup with adaptive GPU dispatch)

- **v10.8.0** - Hybrid BM25 + Vector Search (combines keyword matching with semantic search, solves exact match problem)
- **v10.7.2** - Server Management Button Fix (Settings modal buttons causing page reload)
- **v10.7.1** - Dashboard API Authentication Fix (complete auth coverage for all endpoints)
- **v10.7.0** - Backup UI Enhancements (View Backups modal, backup directory display, enhanced API)
- **v10.6.1** - Dashboard SSE Authentication Fix (EventSource API compatibility with query params)
- **v10.6.0** - Server Management Dashboard: Complete server administration from Dashboard Settings
- **v10.5.1** - Test Environment Safety: 4 critical scripts to prevent production database testing
- **v10.5.0** - Dashboard Authentication UI: Graceful user experience (authentication modal, API key/OAuth flows)
- **v10.4.6** - Documentation Enhancement: HTTP dashboard authentication requirements clarified (authentication setup examples)
- **v10.4.5** - Unified CLI Interface: `memory server --http` flag (easier UX, single command)
- **v10.4.4** - CRITICAL Security Fix: Timing attack vulnerability in API key comparison (CWE-208) + API Key Auth without OAuth
- **v10.4.2** - Docker Container Startup Fix (ModuleNotFoundError: aiosqlite)
- **v10.4.1** - Bug Fix: Time Expression Parsing (natural language time expressions fixed)
- **v10.4.0** - Memory Hook Quality Improvements (semantic deduplication, tag normalization, budget optimization)
- **v10.2.1** - MCP Client Compatibility & Delete Operations Fixes (integer enum fix, method name corrections)
- **v10.2.0** - External Embedding API Support (vLLM, Ollama, TEI, OpenAI integration)
- **v10.1.2** - Windows PowerShell 7+ Service Management Fix (SSL compatibility for manage_service.ps1)
- **v10.1.1** - Dependency & Windows Compatibility Fixes (requests dependency, PowerShell 7+ SSL support)
- **v10.1.0** - Python 3.14 Support (Extended compatibility to 3.10-3.14, tokenizers upgrade)
- **v10.0.3** - CRITICAL FIX: Backup Scheduler Now Works (2 critical bugs fixed, FastAPI lifespan integration)
- **v10.0.2** - Tool List Cleanup (Only 12 unified tools visible, 64% tool reduction complete)
- **v10.0.1** - CRITICAL HOTFIX: MCP tools loading restored (Python boolean fix)
- **v10.0.0** - ⚠️ BROKEN: Major API Redesign (64% Tool Consolidation) - Tools failed to load, use v10.0.2 instead
- **v9.3.1** - Critical shutdown bug fix (SIGTERM/SIGINT handling, clean server termination)
- **v9.3.0** - Relationship Inference Engine (Intelligent association typing, multi-factor analysis, confidence scoring)
- **v9.2.1** - Critical Knowledge Graph bug fix (MigrationRunner, 37 test fixes, idempotent migrations)
- **v9.2.0** - Knowledge Graph Dashboard with D3.js v7.9.0 (Interactive force-directed visualization, 6 typed relationships, 7-language support)
- **v9.0.6** - OAuth Persistent Storage Backend (SQLite-based for multi-worker deployments, <10ms token operations)
- **v9.0.5** - CRITICAL HOTFIX: OAuth 2.1 token endpoint routing bug fixed (HTTP 422 errors eliminated)
- **v9.0.4** - OAuth validation blocking server startup fixed (OAUTH_ENABLED default changed to False, validation made non-fatal)
- **v9.0.2** - Critical hotfix: Includes actual code fix for mass deletion bug (confirm_count parameter now REQUIRED)
- **v9.0.1** - Incorrectly tagged release (⚠️ Does NOT contain fix - use v9.0.2 instead)
- **v9.0.0** - Phase 0 Ontology Foundation (⚠️ Contains critical bug - upgrade to v9.0.2 immediately)
- **v8.76.0** - Official Lite Distribution (90% size reduction: 7.7GB → 805MB, dual publishing workflow)
- **v8.75.1** - Hook Installer Fix (flexible MCP server naming support, custom configurations)
- **v8.75.0** - Lightweight ONNX Quality Scoring (90% installation size reduction: 7.7GB → 805MB, same quality scoring performance)
- **v8.74.0** - Cross-Platform Orphan Process Cleanup (database lock prevention, automatic orphan detection after crashes)
- **v8.73.0** - Universal Permission Request Hook (auto-approves safe operations, eliminates repetitive prompts for 12+ tools)
- **v8.72.0** - Graph Traversal MCP Tools (find_connected_memories, find_shortest_path, get_memory_subgraph - 5-25ms, 30x faster)
- **v8.71.0** - Memory Management APIs and Graceful Shutdown (cache cleanup, process monitoring, production-ready memory management)
- **v8.70.0** - User Override Commands for Memory Hooks (`#skip`/`#remember` for manual memory control)
- **v8.69.0** - MCP Tool Annotations for Improved LLM Decision-Making (readOnlyHint/destructiveHint, auto-approval for 12 tools)
- **v8.68.2** - Platform Detection Improvements (Apple Silicon MPS support, 3-5x faster, comprehensive hardware detection)
- **v8.68.1** - Critical Data Integrity Bug Fix - Hybrid Backend (ghost memories fixed, 5 method fixes)
- **v8.68.0** - Update & Restart Automation (87% time reduction, <2 min workflow, cross-platform scripts)
- **v8.62.13** - HTTP-MCP Bridge API Endpoint Fix (Remote deployments restored with POST endpoints)
- **v8.62.12** - Quality Analytics UI Fixed ("Invalid Date" and "ID: undefined" bugs)
- **v8.62.10** - Document Ingestion Bug Fixed (NameError in web console, circular import prevention)
- **v8.62.8** - Environment Configuration Loading Bug Fixed (.env discovery, python-dotenv dependency)
- **v8.62.7** - Windows SessionStart Hook Bug Fixed in Claude Code 2.0.76+ (no more Windows hanging)
- **v8.62.6** - CRITICAL PRODUCTION HOTFIX: SQLite Pragmas Container Restart Bug (database locking errors after container restarts)
- **v8.62.5** - Test Suite Stability: 40 Tests Fixed (99% pass rate, 68% → 99% improvement)
- **v8.62.4** - CRITICAL BUGFIX: SQLite-Vec KNN Syntax Error (semantic search completely broken on sqlite-vec/hybrid backends)
- **v8.62.3** - CRITICAL BUGFIX: Memory Recall Handler Import Error (time_parser import path correction)
- **v8.62.2** - Test Infrastructure Improvements (5 test failures resolved, consolidation & performance suite stability)
- **v8.62.1** - Critical Bug Fix: SessionEnd Hook Real Conversation Data (hardcoded mock data fix, robust transcript parsing)
- **v8.62.0** - Comprehensive Test Coverage Infrastructure - 100% Handler Coverage Achievement (35 tests, 800+ lines, CI/CD gate)
- **v8.61.2** - CRITICAL HOTFIX: delete_memory KeyError Fix (response parsing, validated delete flow)
- **v8.61.1** - CRITICAL HOTFIX: Import Error Fix (5 MCP tools restored, relative import path correction)
- **v8.60.0** - Health Check Strategy Pattern Refactoring - Phase 3.1 (78% complexity reduction, Strategy pattern)
- **v8.59.0** - Server Architecture Refactoring - Phase 2 (40% code reduction, 29 handlers extracted, 5 specialized modules)
- **v8.58.0** - Test Infrastructure Stabilization - 100% Pass Rate Achievement (81.6% → 100%, 52 tests fixed)
- **v8.57.1** - Hotfix: Python -m Execution Support for CI/CD (server/__main__.py, --version/--help flags)
- **v8.57.0** - Test Infrastructure Improvements - Major Stability Boost (+6% pass rate, 32 tests fixed)
- **v8.56.0** - Server Architecture Refactoring - Phase 1 (4 focused modules, -7% lines, backward compatible)
- **v8.55.0** - AI-Optimized MCP Tool Descriptions (30-50% reduction in incorrect tool selection)
- **v8.54.4** - Critical MCP Tool Bugfix (check_database_health method call correction)
- **v8.54.3** - Chunked Storage Error Reporting Fix (accurate failure messages, partial success tracking)
- **v8.54.2** - Offline Mode Fix (opt-in offline mode, first-time install support)
- **v8.54.1** - UV Virtual Environment Support (installer compatibility fix)
- **v8.54.0** - Smart Auto-Capture System (intelligent pattern detection, 6 memory types, bilingual support)
- **v8.53.0** - Windows Service Management (Task Scheduler support, auto-startup, watchdog monitoring, 819 lines PowerShell automation)
- **v8.52.2** - Hybrid Backend Maintenance Enhancement (multi-PC association cleanup, drift prevention, Vectorize error handling)
- **v8.52.1** - Windows Embedding Fallback & Script Portability (DLL init failure fix, MCP_HTTP_PORT support)
- **v8.52.0** - Time-of-Day Emoji Icons (8 time-segment indicators, dark mode support, automatic timezone)
- **v8.51.0** - Graph Database Architecture (30x query performance, 97% storage reduction for associations)
- **v8.50.1** - Critical Bug Fixes (MCP_EMBEDDING_MODEL fix, installation script backend support, i18n quality analytics complete)
- **v8.50.0** - Fallback Quality Scoring (DeBERTa + MS-MARCO hybrid, technical content rescue, 20/20 tests passing)
- **v8.49.0** - DeBERTa Quality Classifier (absolute quality assessment, eliminates self-matching bias)
- **v8.48.4** - Cloudflare D1 Drift Detection Performance (10-100x faster queries, numeric comparison fix)
- **v8.48.3** - Code Execution Hook Fix - 75% token reduction now working (fixed time_filter parameter, Python warnings, venv detection)
- **v8.48.2** - HTTP Server Auto-Start & Time Parser Improvements (smart service management, "last N periods" support)
- **v8.48.1** - Critical Hotfix - Startup Failure Fix (redundant calendar import removed, immediate upgrade required)
- **v8.48.0** - CSV-Based Metadata Compression (78% size reduction, 100% sync success, metadata validation)
- **v8.47.1** - ONNX Quality Evaluation Bug Fixes (self-match fix, association pollution, sync queue overflow, realistic distribution)
- **v8.47.0** - Association-Based Quality Boost (connection-based enhancement, network effect leverage, metadata persistence)
- **v8.46.3** - Quality Score Persistence Fix (ONNX scores in hybrid backend, metadata normalization)
- **v8.46.2** - Session-Start Hook Crash Fix + Hook Installer Improvements (client-side tag filtering, isolated version metadata)
- **v8.46.1** - Windows Hooks Installer Fix + Quality System Integration (UTF-8 console configuration, backend quality scoring)
- **v8.45.3** - ONNX Ranker Model Export Fix (automatic model export, offline mode support, 7-16ms CPU performance)
- **v8.45.2** - Dashboard Dark Mode Consistency Fixes (global CSS overrides, Chart.js dark mode support)
- **v8.45.1** - Quality System Test Infrastructure Fixes (HTTP API router, storage retrieval, async test client)
- **v8.45.0** - Memory Quality System - AI-Driven Automatic Quality Scoring (ONNX-powered local SLM, multi-tier fallback, quality-based retention)
- **v8.44.0** - Multi-Language Expansion (Japanese, Korean, German, French, Spanish - 359 keys each, complete i18n coverage)
- **v8.43.0** - Internationalization & Quality Automation (English/Chinese i18n, Claude branch automation, quality gates)
- **v8.42.1** - MCP Resource Handler Fix (`AttributeError` with Pydantic AnyUrl objects)
- **v8.42.0** - Memory Awareness Enhancements (visible memory injection, quality session summaries, LLM-powered summarization)
- **v8.41.2** - Hook Installer Utility File Deployment (ALL 14 utilities copied, future-proof glob pattern)
- **v8.41.1** - Context Formatter Memory Sorting (recency sorting within categories, newest first)
- **v8.41.0** - Session Start Hook Reliability Improvements (error suppression, clean output, memory filtering, classification fixes)
- **v8.40.0** - Session Start Version Display (automatic version comparison, PyPI status labels)
- **v8.39.1** - Dashboard Analytics Bug Fixes: Three critical fixes (top tags filtering, recent activity display, storage report fields)
- **v8.39.0** - Performance Optimization: Storage-layer date-range filtering (10x faster analytics, 97% data transfer reduction)
- **v8.38.1** - Critical Hotfix: HTTP MCP JSON-RPC 2.0 compliance fix (Claude Code/Desktop connection failures resolved)
- **v8.38.0** - Code Quality: Phase 2b COMPLETE (~176-186 lines duplicate code eliminated, 10 consolidations)
- **v8.37.0** - Code Quality: Phase 2a COMPLETE (5 duplicate high-complexity functions eliminated)
- **v8.36.1** - Critical Hotfix: HTTP server startup crash fix (forward reference error in analytics.py)
- **v8.36.0** - Code Quality: Phase 2 COMPLETE (100% of target achieved, -39 complexity points)
- **v8.35.0** - Code Quality: Phase 2 Batch 1 (install.py, cloudflare.py, -15 complexity points)
- **v8.34.0** - Code Quality: Phase 2 Complexity Reduction (analytics.py refactored, 11 → 6-7 complexity)
- **v8.33.0** - Critical Installation Bug Fix + Code Quality Improvements (dead code cleanup, automatic MCP setup)
- **v8.32.0** - Code Quality Excellence: pyscn Static Analysis Integration (multi-layer QA workflow)
- **v8.31.0** - Revolutionary Batch Update Performance (21,428x faster memory consolidation)
- **v8.30.0** - Analytics Intelligence: Adaptive Charts & Critical Data Fixes (accurate trend visualization)
- **v8.28.1** - Critical HTTP MCP Transport JSON-RPC 2.0 Compliance Fix (Claude Code compatibility)
- **v8.28.0** - Cloudflare AND/OR Tag Filtering (unified search API, 3-5x faster hybrid sync)
- **v8.27.1** - Critical Hotfix: Timestamp Regression (created_at preservation during metadata sync)
- **v8.26.0** - Revolutionary MCP Performance (534,628x faster tools, 90%+ cache hit rate)
- **v8.25.0** - Hybrid Backend Drift Detection (automatic metadata sync, bidirectional awareness)
- **v8.24.4** - Code Quality Improvements from Gemini Code Assist (regex sanitization, DOM caching)
- **v8.24.3** - Test Coverage & Release Agent Improvements (tag+time filtering tests, version history fix)
- **v8.24.2** - CI/CD Workflow Fixes (bash errexit handling, exit code capture)
- **v8.24.1** - Test Infrastructure Improvements (27 test failures resolved, 63% → 71% pass rate)
- **v8.24.0** - PyPI Publishing Enabled (automated package publishing via GitHub Actions)
- **v8.23.1** - Stale Virtual Environment Prevention System (6-layer developer protection)
- **v8.23.0** - Consolidation Scheduler via Code Execution API (88% token reduction)
**📖 Full Details**: [CHANGELOG.md](CHANGELOG.md) | [All Releases](https://github.com/doobidoo/mcp-memory-service/releases)
---
## Migration to v9.0.0
**⚡ TL;DR**: No manual migration needed - upgrades happen automatically!
**Breaking Changes:**
- **Memory Type Ontology**: Legacy types auto-migrate to new taxonomy (task→observation, note→observation)
- **Asymmetric Relationships**: Directed edges only (no longer bidirectional)
**Migration Process:**
1. Stop your MCP server
2. Update to latest version (`git pull` or `pip install --upgrade mcp-memory-service`)
3. Restart server - automatic migrations run on startup:
- Database schema migrations (009, 010)
- Memory type soft-validation (legacy types → observation)
- No tag migration needed (backward compatible)
**Safety**: Migrations are idempotent and safe to re-run
---
### Breaking Changes
#### 1. Memory Type Ontology
**What Changed:**
- Legacy memory types (task, note, standard) are deprecated
- New formal taxonomy: 5 base types (observation, decision, learning, error, pattern) with 21 subtypes
- Type validation now defaults to 'observation' for invalid types (soft validation)
**Migration Process:**
✅ **Automatic** - No manual action required!
When you restart the server with v9.0.0:
- Invalid memory types are automatically soft-validated to 'observation'
- Database schema updates run automatically
- Existing memories continue to work without modification
**New Memory Types:**
- observation: General observations, facts, and discoveries
- decision: Decisions and planning
- learning: Learnings and insights
- error: Errors and failures
- pattern: Patterns and trends
**Backward Compatibility:**
- Existing memories will be auto-migrated (task→observation, note→observation, standard→observation)
- Invalid types default to 'observation' (no errors thrown)
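The fallback rules above amount to a small mapping — a minimal sketch of the soft-validation behavior (the `soft_validate` helper and `LEGACY_MAP` names are illustrative, not the project's actual implementation):

```python
# New taxonomy base types and the legacy mapping described above.
BASE_TYPES = {"observation", "decision", "learning", "error", "pattern"}
LEGACY_MAP = {"task": "observation", "note": "observation", "standard": "observation"}

def soft_validate(memory_type: str) -> str:
    """Map legacy or unknown types into the new taxonomy; never raise."""
    if memory_type in BASE_TYPES:
        return memory_type
    return LEGACY_MAP.get(memory_type, "observation")

print(soft_validate("task"))      # observation (legacy type migrated)
print(soft_validate("decision"))  # decision (already valid)
print(soft_validate("banana"))    # observation (invalid types fall back, no error)
```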
#### 2. Asymmetric Relationships
**What Changed:**
- Asymmetric relationships (causes, fixes, supports, follows) now store only directed edges
- Symmetric relationships (related, contradicts) continue storing bidirectional edges
- Database migration (010) removes incorrect reverse edges
**Migration Required:**
No action needed - database migration runs automatically on startup.
**Code Changes Required:**
If your code expects bidirectional storage for asymmetric relationships:
```python
# OLD (will no longer work):
# Asymmetric relationships were stored bidirectionally
result = storage.find_connected(memory_id, relationship_type="causes")

# NEW (correct approach):
# Use the direction parameter for asymmetric relationships
result = storage.find_connected(
    memory_id,
    relationship_type="causes",
    direction="both",  # Explicit direction required for asymmetric types
)
```
**Relationship Types:**
- Asymmetric: causes, fixes, supports, follows (A→B ≠ B→A)
- Symmetric: related, contradicts (A↔B)
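The storage rule behind this distinction can be sketched in a few lines — symmetric types persist both directions, asymmetric types only the directed edge (the `edges_to_store` helper is illustrative, not the project's actual schema code):

```python
# Symmetric relationship types from the list above; everything else is
# treated as asymmetric and stored as a single directed edge.
SYMMETRIC = {"related", "contradicts"}

def edges_to_store(src: str, dst: str, rel_type: str) -> list:
    """Return the edge rows to persist for one relationship (sketch)."""
    edges = [(src, dst, rel_type)]
    if rel_type in SYMMETRIC:
        edges.append((dst, src, rel_type))  # mirror edge for A<->B types
    return edges

print(edges_to_store("A", "B", "causes"))   # one directed edge only
print(edges_to_store("A", "B", "related"))  # both directions stored
```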
### Performance Improvements
- Ontology validation: 97.5x faster (module-level caching)
- Type lookups: 35.9x faster (cached reverse maps)
- Tag validation: 47.3% faster (eliminated double parsing)
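Module-level caching, the technique credited above, commonly looks like a pure lookup function memoized once per process — an illustrative pattern only (using stdlib `functools.lru_cache`; this mirrors the technique named above, not the project's actual code):

```python
from functools import lru_cache

# Cache a pure validation check at module level so repeated lookups
# skip recomputation entirely.
VALID_TYPES = frozenset({"observation", "decision", "learning", "error", "pattern"})

@lru_cache(maxsize=None)
def is_valid_type(memory_type: str) -> bool:
    return memory_type in VALID_TYPES

is_valid_type("decision")               # first call: cache miss, computed
print(is_valid_type.cache_info().hits)  # 0
is_valid_type("decision")               # second call: served from cache
print(is_valid_type.cache_info().hits)  # 1
```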
### Testing
- 829/914 tests passing (90.7%)
- 80 new ontology tests with 100% backward compatibility
- All API/HTTP integration tests passing
### Support
If you encounter issues during migration:
- Check [Troubleshooting Guide](docs/troubleshooting/)
- Review [CHANGELOG.md](CHANGELOG.md) for detailed changes
- Open an issue: https://github.com/doobidoo/mcp-memory-service/issues
---
## 📚 Documentation & Resources
- **[Agent Integration Guides](docs/agents/)** 🆕 – LangGraph, CrewAI, AutoGen, HTTP generic
- **[Installation Guide](docs/installation.md)** – Detailed setup instructions
- **[Configuration Guide](docs/mastery/configuration-guide.md)** – Backend options and customization
- **[Architecture Overview](docs/architecture.md)** – How it works under the hood
- **[Team Setup Guide](docs/teams.md)** – OAuth and cloud collaboration
- **[Knowledge Graph Dashboard](docs/features/knowledge-graph-dashboard.md)** 🆕 – Interactive graph visualization guide
- **[Troubleshooting](docs/troubleshooting/)** – Common issues and solutions
- **[API Reference](docs/api.md)** – Programmatic usage
- **[Wiki](https://github.com/doobidoo/mcp-memory-service/wiki)** – Complete documentation
- [DeepWiki](https://deepwiki.com/doobidoo/mcp-memory-service) – AI-powered documentation assistant
---
## 🤝 Contributing
We welcome contributions! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
**Quick Development Setup:**
```bash
git clone https://github.com/doobidoo/mcp-memory-service.git
cd mcp-memory-service
pip install -e . # Editable install
pytest tests/ # Run test suite
```
---
## ⭐ Support
If this saves you time, give us a star! ⭐
- **Issues**: [GitHub Issues](https://github.com/doobidoo/mcp-memory-service/issues)
- **Discussions**: [GitHub Discussions](https://github.com/doobidoo/mcp-memory-service/discussions)
- **Wiki**: [Documentation Wiki](https://github.com/doobidoo/mcp-memory-service/wiki)
---
## 📄 License
Apache 2.0 – See [LICENSE](LICENSE) for details.
---
<p align="center">
<strong>Never explain your project to AI twice.</strong><br/>
Start using MCP Memory Service today.
</p>
## ⚠️ v6.17.0+ Script Migration Notice
**Updating from an older version?** Scripts have been reorganized for better maintainability:
- **Recommended**: Use `python -m mcp_memory_service.server` in your Claude Desktop config (no path dependencies!)
- **Alternative 1**: Use `uv run memory server` with UV tooling
- **Alternative 2**: Update path from `scripts/run_memory_server.py` to `scripts/server/run_memory_server.py`
- **Backward compatible**: Old path still works with a migration notice
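For the recommended option, the Claude Desktop entry looks roughly like this (the server name `"memory"` and the exact config location are illustrative; adjust to your setup):

```json
{
  "mcpServers": {
    "memory": {
      "command": "python",
      "args": ["-m", "mcp_memory_service.server"]
    }
  }
}
```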
## ⚠️ First-Time Setup Expectations
On your first run, you'll see some warnings that are **completely normal**:
- **"WARNING: Failed to load from cache: No snapshots directory"** - The service is checking for cached models (first-time setup)
- **"WARNING: Using TRANSFORMERS_CACHE is deprecated"** - Informational warning, doesn't affect functionality
- **Model download in progress** - The service automatically downloads a ~25MB embedding model (takes 1-2 minutes)
These warnings disappear after the first successful run. The service is working correctly! For details, see our [First-Time Setup Guide](docs/first-time-setup.md).
### 🐍 Python 3.13 Compatibility Note
**sqlite-vec** may not have pre-built wheels for Python 3.13 yet. If installation fails:
- The installer will automatically try multiple installation methods
- Consider using Python 3.12 for the smoothest experience: `brew install python@3.12`
- Alternative: Use Cloudflare backend with `--storage-backend cloudflare`
- See [Troubleshooting Guide](docs/troubleshooting/general.md#python-313-sqlite-vec-issues) for details
### 🍎 macOS SQLite Extension Support
**macOS users** may encounter `enable_load_extension` errors with sqlite-vec:
- **System Python** on macOS lacks SQLite extension support by default
- **Solution**: Use Homebrew Python: `brew install python && rehash`
- **Alternative**: Use pyenv: `PYTHON_CONFIGURE_OPTS='--enable-loadable-sqlite-extensions' pyenv install 3.12.0`
- **Fallback**: Use Cloudflare or Hybrid backend: `--storage-backend cloudflare` or `--storage-backend hybrid`
- See [Troubleshooting Guide](docs/troubleshooting/general.md#macos-sqlite-extension-issues) for details
## 🎯 Memory Awareness in Action
**Intelligent Context Injection** - See how the memory service automatically surfaces relevant information at session start:
<img src="docs/assets/images/memory-awareness-hooks-example.png" alt="Memory Awareness Hooks in Action" width="100%" />
**What you're seeing:**
- 🧠 **Automatic memory injection** - 8 relevant memories found from 2,526 total
- 📂 **Smart categorization** - Recent Work, Current Problems, Additional Context
- 📊 **Git-aware analysis** - Recent commits and keywords automatically extracted
- 🎯 **Relevance scoring** - Top memories scored at 100% (today), 89% (8d ago), 84% (today)
- ⚡ **Fast retrieval** - SQLite-vec backend with 5ms read performance
- 🔄 **Background sync** - Hybrid backend syncing to Cloudflare
**Result**: Claude starts every session with full project context - no manual prompting needed.
## 📚 Complete Documentation
**👉 Visit our comprehensive [Wiki](https://github.com/doobidoo/mcp-memory-service/wiki) for detailed guides:**
### 🧠 v7.1.3 Natural Memory Triggers (Latest)
- **[Natural Memory Triggers v7.1.3 Guide](https://github.com/doobidoo/mcp-memory-service/wiki/Natural-Memory-Triggers-v7.1.0)** - Intelligent automatic memory awareness
- ✅ **85%+ trigger accuracy** with semantic pattern detection
- ✅ **Multi-tier performance** (50ms instant → 150ms fast → 500ms intensive)
- ✅ **CLI management system** for real-time configuration
- ✅ **Git-aware context** integration for enhanced relevance
- ✅ **Zero-restart installation** with dynamic hook loading
### 🆕 v7.0.0 OAuth & Team Collaboration
- **[🔐 OAuth 2.1 Setup Guide](https://github.com/doobidoo/mcp-memory-service/wiki/OAuth-2.1-Setup-Guide)** - **NEW!** Complete OAuth 2.1 Dynamic Client Registration guide
- **[🔗 Integration Guide](https://github.com/doobidoo/mcp-memory-service/wiki/03-Integration-Guide)** - Claude Desktop, **Claude Code HTTP transport**, VS Code, and more
- **[🛡️ Advanced Configuration](https://github.com/doobidoo/mcp-memory-service/wiki/04-Advanced-Configuration)** - **Updated!** OAuth security, enterprise features
### 🧬 v8.23.0+ Memory Consolidation
- **[📊 Memory Consolidation System Guide](https://github.com/doobidoo/mcp-memory-service/wiki/Memory-Consolidation-System-Guide)** - **NEW!** Automated memory maintenance with real-world performance metrics
- ✅ **Dream-inspired consolidation** (decay scoring, association discovery, compression, archival)
- ✅ **24/7 automatic scheduling** (daily/weekly/monthly via HTTP server)
- ✅ **Token-efficient Code Execution API** (90% token reduction vs MCP tools)
- ✅ **Real-world performance data** (4-6 min for 2,495 memories with hybrid backend)
- ✅ **Three manual trigger methods** (HTTP API, MCP tools, Python API)
### 🚀 Setup & Installation
- **[📋 Installation Guide](https://github.com/doobidoo/mcp-memory-service/wiki/01-Installation-Guide)** - Complete installation for all platforms and use cases
- **[🖥️ Platform Setup Guide](https://github.com/doobidoo/mcp-memory-service/wiki/02-Platform-Setup-Guide)** - Windows, macOS, and Linux optimizations
- **[⚡ Performance Optimization](https://github.com/doobidoo/mcp-memory-service/wiki/05-Performance-Optimization)** - Speed up queries, optimize resources, scaling
### 🧠 Advanced Topics
- **[👨‍💻 Development Reference](https://github.com/doobidoo/mcp-memory-service/wiki/06-Development-Reference)** - Claude Code hooks, API reference, debugging
- **[🔧 Troubleshooting Guide](https://github.com/doobidoo/mcp-memory-service/wiki/07-TROUBLESHOOTING)** - **Updated!** OAuth troubleshooting + common issues
- **[❓ FAQ](https://github.com/doobidoo/mcp-memory-service/wiki/08-FAQ)** - Frequently asked questions
- **[📝 Examples](https://github. | text/markdown | null | Heinrich Krupp <heinrich.krupp@gmail.com> | null | null | Apache-2.0 | agent-memory, agentic-ai, ai-agents, ai-assistant, autogen, claude-desktop, cloudflare, crewai, fastapi, knowledge-graph, langgraph, long-term-memory, mcp, memory-consolidation, model-context-protocol, multi-agent, open-source, privacy-first, rag, rest-api, self-hosted, semantic-memory, semantic-search, sqlite-vec, vector-database | [
"Development Status :: 5 - Production/Stable",
"Environment :: Console",
"Framework :: FastAPI",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10... | [] | null | null | >=3.10 | [] | [] | [] | [
"aiofiles>=23.2.1",
"aiohttp>=3.8.0",
"aiosqlite>=0.20.0",
"apscheduler>=3.11.0",
"authlib>=1.2.0",
"build>=0.10.0",
"chardet>=5.0.0",
"click>=8.0.0",
"fastapi>=0.115.0",
"httpx>=0.24.0",
"mcp<2.0.0,>=1.8.0",
"psutil>=5.9.0",
"pypdf2>=3.0.0",
"python-dotenv>=1.0.0",
"python-jose[cryptogr... | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T05:42:10.471900 | mcp_memory_service-10.17.2.tar.gz | 5,575,927 | b6/e9/cf297d3b51982266c802578cecfc969b4b58f0c543b3094b80f8cbd53b7e/mcp_memory_service-10.17.2.tar.gz | source | sdist | null | false | 9593ecc07f37bdf596d6310319896f6b | 1747522fb55f0eeddb19b1932ba2a5de4f8e360338977c699e2a902fac169673 | b6e9cf297d3b51982266c802578cecfc969b4b58f0c543b3094b80f8cbd53b7e | null | [
"LICENSE",
"NOTICE"
] | 317 |
2.4 | enrobie | 0.14.9 | Enasis Network Chatting Robie | # Enasis Network Chatting Robie
> This project has not released its first major version.
Barebones service for connecting to multiple upstream chat networks.
<a href="https://pypi.org/project/enrobie"><img src="https://enasisnetwork.github.io/enrobie/badges/pypi.png"></a><br>
<a href="https://enasisnetwork.github.io/enrobie/validate/flake8.txt"><img src="https://enasisnetwork.github.io/enrobie/badges/flake8.png"></a><br>
<a href="https://enasisnetwork.github.io/enrobie/validate/pylint.txt"><img src="https://enasisnetwork.github.io/enrobie/badges/pylint.png"></a><br>
<a href="https://enasisnetwork.github.io/enrobie/validate/ruff.txt"><img src="https://enasisnetwork.github.io/enrobie/badges/ruff.png"></a><br>
<a href="https://enasisnetwork.github.io/enrobie/validate/mypy.txt"><img src="https://enasisnetwork.github.io/enrobie/badges/mypy.png"></a><br>
<a href="https://enasisnetwork.github.io/enrobie/validate/yamllint.txt"><img src="https://enasisnetwork.github.io/enrobie/badges/yamllint.png"></a><br>
<a href="https://enasisnetwork.github.io/enrobie/validate/pytest.txt"><img src="https://enasisnetwork.github.io/enrobie/badges/pytest.png"></a><br>
<a href="https://enasisnetwork.github.io/enrobie/validate/coverage.txt"><img src="https://enasisnetwork.github.io/enrobie/badges/coverage.png"></a><br>
<a href="https://enasisnetwork.github.io/enrobie/validate/sphinx.txt"><img src="https://enasisnetwork.github.io/enrobie/badges/sphinx.png"></a><br>
## Documentation
Read [project documentation](https://enasisnetwork.github.io/enrobie/sphinx)
built using the [Sphinx](https://www.sphinx-doc.org/) project.
Should you venture into the sections below, you will be able to use the
`sphinx` recipe to build documentation in the `sphinx/html` directory.
## Installing the package
Installing the stable release from the PyPI repository
```
pip install enrobie
```
Installing the latest from the GitHub repository
```
pip install git+https://github.com/enasisnetwork/enrobie
```
## Running the service
There are several command-line arguments; see them all here.
```
python -m enrobie.execution.service --help
```
Here is an example of running the service from inside the project folder
within the [Workspace](https://github.com/enasisnetwork/workspace) project.
```
python -m enrobie.execution.service \
--config ../../Persistent/enrobie-prod.yml \
--console \
--debug \
--print_command
```
Replace `../../Persistent/enrobie-prod.yml` with your configuration file.
## Using the Ainswer plugin
These dependencies are not automatically installed but are required when
using the new `AinswerPlugin`. Install the following when using that.
- `pydantic-ai-slim`
- `pydantic-ai-slim[anthropic]`
- `pydantic-ai-slim[openai]`
## Deploying the service
It is possible to deploy the project with the Ansible roles located within
the [Orchestro](https://github.com/enasisnetwork/orchestro) project! Below
is an example of what you might run from that project to deploy this one.
However, there is a bit to consider here, as this requires some configuration.
```
make -s \
stage=prod limit=all \
ansible_args=" --diff" \
enrobie-install
```
Or you may use the Ansible collection directly!
[GitHub](https://github.com/enasisnetwork/ansible-projects),
[Galaxy](https://galaxy.ansible.com/ui/repo/published/enasisnetwork/projects)
## Quick start for local development
Start by cloning the repository to your local machine.
```
git clone https://github.com/enasisnetwork/enrobie.git
```
Set up the Python virtual environments expected by the Makefile.
```
make -s venv-create
```
### Execute the linters and tests
The comprehensive approach is to use the `check` recipe. This will stop on
any failure that is encountered.
```
make -s check
```
However, you can run the linters in a non-blocking mode.
```
make -s linters-pass
```
And finally run the various tests to validate the code and produce coverage
information found in the `htmlcov` folder in the root of the project.
```
make -s pytest
```
## Version management
> :warning: Ensure that no changes are pending.
1. Rebuild the environment.
```
make -s check-revenv
```
1. Update the [version.txt](enrobie/version.txt) file.
1. Push to the `main` branch.
1. Create [repository](https://github.com/enasisnetwork/enrobie) release.
1. Build the Python package.<br>Be sure there are no uncommitted files in the tree.
```
make -s pypackage
```
1. Upload the Python package to PyPI test.
```
make -s pypi-upload-test
```
1. Upload the Python package to PyPI prod.
```
make -s pypi-upload-prod
```
| text/markdown | null | null | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"encommon>=0.22.11",
"enconnect>=0.17.18",
"enhomie>=0.13.10"
] | [] | [] | [] | [
"Source, https://github.com/enasisnetwork/enrobie",
"Documentation, https://enasisnetwork.github.io/enrobie/sphinx"
] | twine/6.2.0 CPython/3.12.6 | 2026-02-21T05:41:51.303788 | enrobie-0.14.9.tar.gz | 93,327 | f1/b8/ab5a1acf67bccf5bf1df187dfba7804230c466fd8de88d090fa6c09ca67d/enrobie-0.14.9.tar.gz | source | sdist | null | false | 7ba033ce658479905fc0601134ab16a6 | 273195124eb039433c7c3e9c7b6fcf3c4e343029af19cc199927bb2cf252583f | f1b8ab5a1acf67bccf5bf1df187dfba7804230c466fd8de88d090fa6c09ca67d | null | [
"LICENSE"
] | 220 |
2.4 | mqtt-entity | 1.1.1 | MQTT client supporting Home Assistant MQTT entity auto-discovery | # MQTT Entity helper library for Home Assistant
[](https://github.com/kellerza/mqtt_entity/actions)
[](https://codecov.io/gh/kellerza/mqtt_entity)
A Python helper library to manage Home Assistant entities over MQTT.
Updated for device-based MQTT discovery.
Features:
- MQTT client based on paho-mqtt
- Retrieve MQTT service info from the Home Assistant Supervisor
- Manage MQTT discovery info (adding/removing entities)
- MQTTDevice class to manage devices
- Availability management
- Manage entities per device
- Home Assistant Entities modelled as attrs classes:
- Read-only: Sensor, BinarySensor
- Read & write: Select, Switch, Number, Text, Light
- MQTT device events
- Asyncio based
- Helpers for Home Assistant add-ons (optional)
- Add-on configuration modeled as attrs classes
- Load from environment variables, HA's options.yaml or options.json
- Load MQTT connection settings from the Supervisor
- Enable add-on logging (incl colors & debug by config)
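The discovery info the library manages follows Home Assistant's MQTT discovery convention. As a rough, library-independent sketch of what such a message looks like (topic layout and field names follow HA's convention; the device and entity identifiers here are invented for the example, and this is not the `mqtt_entity` API):

```python
import json

# Shape of a Home Assistant MQTT discovery message for a sensor entity.
# A library like this publishes a JSON config payload to a retained
# ".../config" topic; HA then creates the entity automatically.
discovery_topic = "homeassistant/sensor/my_device/power/config"
payload = {
    "name": "Power",
    "unique_id": "my_device_power",
    "state_topic": "my_device/power/state",
    "availability_topic": "my_device/availability",
    "unit_of_measurement": "W",
    "device": {
        "identifiers": ["my_device"],
        "name": "My Device",
    },
}
message = json.dumps(payload)

# Publishing an empty payload to the same config topic removes the entity --
# the "discovery removal" feature mentioned above.
removal_message = ""
print(discovery_topic)
print(message)
```

State updates then go to the plain `state_topic` as ordinary MQTT publishes; only the config payload is discovery-specific.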
## Why?
This MQTT code was included in several of my Home Assistant add-ons (SMA-EM / Sunsynk). It is easier to update a single library and add new features, like discovery removal.
Alternative options (not based on asyncio):
- <https://pypi.org/project/ha-mqtt-discoverable/>
- <https://pypi.org/project/homeassistant-mqtt-binding/>
## Credits
@Ivan-L contributed some of the writable entities to the Sunsynk add-on project.
## Release
Semantic versioning is used for releases.
To create a new release, include a commit with a :dolphin: emoji as a prefix in the commit message. This will trigger a release on the master branch.
```bash
# Patch
git commit -m ":dolphin: Release 0.0.x"
# Minor
git commit -m ":rocket: Release 0.x.0"
```
### Development
To run the tests, you need to have Python 3.12+ installed.
The `--mqtt` flag connects to a live Home Assistant instance using the MQTT broker.
```bash
uv run pytest --mqtt
```
| text/markdown | Johann Kellerman | Johann Kellerman <kellerza@gmail.com> | null | null | null | asyncio, discovery, home-assistant, library, mqtt | [
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiohttp<4,>3.12",
"cattrs<27,>=25",
"colorlog",
"paho-mqtt<3,>=2.1",
"pyyaml<7,>=6; extra == \"options\""
] | [] | [] | [] | [
"Homepage, https://github.com/kellerza/mqtt_entity"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T05:41:21.273491 | mqtt_entity-1.1.1.tar.gz | 16,852 | 29/94/3543f39ea1241d2a4b106d39b9600cf900dd5c36f95066cb90020ae8237e/mqtt_entity-1.1.1.tar.gz | source | sdist | null | false | db3f50011def05fbcb022327ac99cd41 | 8d7533a149d2540f6fef14f0184e8b4ade30545e37c8aab4f9da137e5abb692d | 29943543f39ea1241d2a4b106d39b9600cf900dd5c36f95066cb90020ae8237e | MIT | [
"LICENSE"
] | 230 |
2.4 | yta-editor-nodes | 0.3.0 | Youtube Autonomous Main Editor Nodes module | # Youtube Autonomous Main Editor Nodes module
The main Editor module related to nodes. | text/markdown | danialcala94 | danielalcalavalera@gmail.com | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9"
] | [] | null | null | ==3.9 | [] | [] | [] | [
"yta_validation<1.0.0,>=0.0.1",
"yta_programming<1.0.0,>=0.0.1",
"yta_math_easings<1.0.0,>=0.0.1",
"yta_editor_parameters<1.0.0,>=0.0.1",
"yta_editor_nodes_common<1.0.0,>=0.0.1",
"yta_editor_nodes_cpu<1.0.0,>=0.0.1",
"yta_editor_nodes_gpu<1.0.0,>=0.0.1",
"yta_editor_time<1.0.0,>=0.0.1"
] | [] | [] | [] | [] | poetry/2.2.0 CPython/3.9.0 Windows/10 | 2026-02-21T05:40:07.897393 | yta_editor_nodes-0.3.0.tar.gz | 18,268 | 82/b6/8f038ee1138d90f4c7571cb07e201711a610b0612a7c54fdb81dab82df9a/yta_editor_nodes-0.3.0.tar.gz | source | sdist | null | false | 32a5af7b13d94b1ba73d065243beeea4 | c379440ef17567c8b23e518310ad925ddbeb41dfcd5e24c3eac911b49bcfdf86 | 82b68f038ee1138d90f4c7571cb07e201711a610b0612a7c54fdb81dab82df9a | null | [] | 213 |
2.4 | venver | 0.5.3 | Automatically activate and deactivate virtual environments when entering or leaving directories. | Venver
######
Automatically activate and deactivate virtual environments when entering or
leaving directories.
Installation
############
.. code-block:: shell
curl -LsSf https://codeberg.org/narvin/venver/raw/branch/main/bin/install.sh | sh
Once finished, the installation script will print additional instructions to
configure your shell.
Usage
#####
Let's assume you have a python project with a virtual environment like this:
.. code-block::
~/
├─ myapp/
│ ├─ .venv/
│ │ ...
│ ├─ src/
│ │ ...
│ ├─ .gitignore
│ ├─ pyproject.toml
│ ├─ README.rst
Add a ``.venver`` file to the project root.
.. code-block:: shell
cd ~/myapp
echo './.venv' > .venver
.. code-block::
~/
├─ myapp/
│ ├─ .venv/
│ │ ...
│ ├─ src/
│ │ ...
│ ├─ .gitignore
│ ├─ .venver
│ ├─ pyproject.toml
│ ├─ README.rst
Navigating anywhere inside the ``myapp`` directory will result in the
environment at ``myapp/.venv`` being activated. And navigating outside of
``myapp`` will deactivate the environment.
.. code-block:: console
~ $ cd myapp
~/myapp (.venv) $ cd src
~/myapp/src (.venv) $ cd ~
~ $ cd myapp/src
~/myapp/src (.venv) $
If you manually deactivate the environment, it won't be automatically activated
again until you navigate outside of ``myapp``, then reenter it.
.. code-block:: console
~ $ cd myapp
~/myapp (.venv) $ deactivate
~/myapp $ cd src
~/myapp/src $ cd ~
~ $ cd myapp
~/myapp (.venv) $
You may specify an environment that isn't in the project directory. This is
useful if you have environments you want to reuse. The activation and
deactivation will still be relative to the ``.venver`` directory, not the
environment directory.
.. code-block::
~/
├─ venvs/
│ ├─ web-venv/
│ │ ...
│ ├─ console-venv/
│ │ ...
.. code-block:: shell
cd ~/myapp
echo '~/venvs/web-venv' > .venver
.. code-block:: console
~ $ cd myapp
~/myapp (web-venv) $ cd src
~/myapp/src (web-venv) $ cd ~
~ $ cd myapp/src
~/myapp/src (web-venv) $
| text/x-rst | Narvin Singh | Narvin Singh <Narvin.A.Singh@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: POSIX :: Linux"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"python-docs-theme~=2025.2; extra == \"doc\"",
"sphinx~=8.1; extra == \"doc\""
] | [] | [] | [] | [
"Homepage, https://venver.readthedocs.io",
"Documentation, https://venver.readthedocs.io",
"Repository, https://codeberg.org/venver",
"Issues, https://codeberg.org/venver/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T05:39:40.031676 | venver-0.5.3-py3-none-any.whl | 16,409 | 8e/38/c30437f5c05ed3fc74496ab9ac8214aa4de798670b85c3e07f352b70b594/venver-0.5.3-py3-none-any.whl | py3 | bdist_wheel | null | false | 83a91bb2c73fb2c6b3372ef81af0b159 | 8c7896de7843c0dc6035fe2510bbc573cc82458ed10841ffd140d4e2567d6bb4 | 8e38c30437f5c05ed3fc74496ab9ac8214aa4de798670b85c3e07f352b70b594 | LicenseRef-no-ai-ethical-license | [
"LICENSE"
] | 223 |
2.4 | seamless-pdf | 1.0.3 | Convert HTML, Markdown, and DOCX documents into continuous, single-page PDFs. | <div align="center">
# Seamless PDF
**Convert HTML, Markdown, and DOCX documents into continuous, single-page PDFs -- no page breaks.**
[](https://pypi.org/project/seamless-pdf/)
[](https://pepy.tech/project/seamless-pdf)
[]()
[](https://github.com/SleepyPandas/Document-to-ContinuousPDF/actions)
[](https://playwright.dev/)
[](LICENSE)
</div>
---
Standard PDF converters split your content across fixed-size pages. **Seamless PDF** renders the entire document onto a single continuous page, sized exactly to the content's width and height. Ideal for long-form reports, documentation snapshots, newsletters, and any workflow where page breaks get in the way or where you want to keep the original viewing experience (for example, it retains the original table of contents).
---
## Features
| Feature | Description |
|---|---|
| **Single-Page Output** | One continuous PDF sized exactly to your content |
| **Multi-Format Input** | Supports `.html`, `.md`, `.markdown`, and `.docx` files |
| **CLI & Python API** | Use from the terminal or integrate directly into your code |
| **Markdown Rendering** | GitHub-flavored Markdown with syntax highlighting via Pygments |
| **Theme Selection** | Render output with `light` or `dark` theme via API/CLI |
| **Page Width Control (v1.0.0)** | Option to enforce a maximum page width (e.g., `800px`) |
| **Custom Margins (v1.0.0)** | Added `--margin-top`, `--margin-right`, `--margin-bottom`, and `--margin-left` arguments |
| **PDF Outlines / Bookmarks (v1.0.0)** | Automatically extracts headers (`<h1>` to `<h6>`) and maps them into native PDF bookmarks |
---
## Installation
```bash
pip install seamless-pdf
python -m playwright install chromium
```
> [!IMPORTANT]
> Playwright uses a headless Chromium browser under the hood to render documents. The standard `pip install` does **not** download the browser binary automatically. On first install (and after Playwright upgrades), you **must** download Chromium by running `python -m playwright install chromium`.
---
## Quick Start
### Command Line
```bash
seamless-pdf input.html -o output.pdf
seamless-pdf README.md -o README.pdf
seamless-pdf report.docx -o report.pdf
seamless-pdf README.md -o README-dark.pdf --theme dark
# Width and Margin control
seamless-pdf README.md -o README-custom.pdf --width 1000px --margin-top 50px --margin-bottom 50px
```
### Python API
```python
from seamless_pdf import convert
convert("input.html", "output.pdf")
convert("README.md", "readme.pdf")
convert("report.docx", "report.pdf")
# Theming, width, and margin overrides
convert(
"README.md",
"readme-custom.pdf",
theme="dark",
width="1000px",
margin_top="50px"
)
```
---
## Usage
The `convert` function automatically detects the input format from the file extension (`.html`, `.htm`, `.md`, `.markdown`, `.docx`). You can also specify the format explicitly:
```python
from seamless_pdf import convert
# Auto-detected as Markdown
convert("docs/notes.md", "notes.pdf")
# Explicit input type override
convert("docs/notes.txt", "notes.pdf", input_type="markdown")
# Optional render theme (light or dark)
convert("docs/notes.md", "notes-dark.pdf", theme="dark")
```
### CLI Options
```bash
# Explicit input type override
seamless-pdf docs/notes.txt -o notes.pdf --input-type markdown
# Dark theme rendering
seamless-pdf docs/notes.md -o notes-dark.pdf --theme dark
# Custom page width and margins
seamless-pdf docs/notes.md -o notes.pdf --width 800px --margin-left 20px --margin-right 20px
```
### Notes on Dark Mode
> [!NOTE]
> Dark mode behavior depends on the input type:
>
> - **Markdown / DOCX inputs**: Seamless PDF generates HTML and injects the selected theme styles. Using `--theme dark` is guaranteed to produce dark-themed output consistently.
> - **HTML inputs**: Seamless PDF respects the source HTML/CSS. If the source HTML is not dark-aware (no dark styles or `prefers-color-scheme`), dark output is **best effort** and cannot be guaranteed. Provide dark-ready HTML for the best results!
### Supported Input Types
| Extension | Type Keyword |
|---|---|
| `.html`, `.htm` | `html` |
| `.md`, `.markdown` | `markdown` |
| `.docx` | `docx` |
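The extension-to-type mapping in the table above can be sketched as a small lookup. This is an illustration of the detection behavior the README describes, not seamless-pdf's internal code:

```python
from pathlib import Path

# Extension -> type keyword mapping, mirroring the "Supported Input Types" table.
EXTENSION_TYPES = {
    ".html": "html",
    ".htm": "html",
    ".md": "markdown",
    ".markdown": "markdown",
    ".docx": "docx",
}

def detect_input_type(path: str) -> str:
    """Infer the input type from the file extension, case-insensitively."""
    ext = Path(path).suffix.lower()
    try:
        return EXTENSION_TYPES[ext]
    except KeyError:
        # Unknown extensions need an explicit override (e.g. input_type="markdown").
        raise ValueError(f"Unsupported extension {ext!r}; pass input_type explicitly")

print(detect_input_type("docs/notes.md"))  # markdown
```

For files outside these extensions (such as `.txt`), the `input_type` / `--input-type` override shown earlier is the way to force a format.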
---
## Requirements
| Dependency | Version |
|---|---|
| Python | 3.10, 3.11, 3.12, 3.13 |
| Playwright (Chromium) | >= 1.40.0 |
| markdown | >= 3.10.1 |
| Pygments | >= 2.17.0 |
| pymdown-extensions | >= 10.0 |
| mammoth | >= 1.6.0 |
| pypdf | >= 3.17.0 |
---
## Roadmap
- [ ] PDF-to-PDF re-rendering (merge & reflow existing PDFs)
- [ ] Broader PDF manipulation toolset
---
## What's New in V1.0.1 / V1.0.0
- Fixed an issue where fractional pixel rounding in Chromium caused a blank second page to render for certain documents.
- Fixed `pypdf` not installing automatically with `pip install seamless-pdf`.
- Added **Page Width Control** via the `--width` CLI argument or API parameter to bound extremely wide documents.
- Added **Custom Margins / Padding** (`--margin-top`, `--margin-right`, `--margin-bottom`, `--margin-left`) to let text breathe.
- Added **PDF Outlines (Bookmarks)**. Seamless PDF now automatically parses headers (`<h1>` to `<h6>`) and injects them hierarchically into the final continuous PDF!
- Hardened unit tests, stabilized edge cases, and expanded CLI/API configuration consistency.
*see [changelog.md](CHANGELOG.md) for more*
---
## Cloning for your purposes...
```bash
git clone https://github.com/SleepyPandas/Document-to-ContinuousPDF.git
cd Document-to-ContinuousPDF
pip install -e ".[dev]"
pytest
```
---
## License
This project is licensed under the **MIT License** -- see the [LICENSE](LICENSE) file for details.
| text/markdown | null | Anthony Hua <tommyrobotics1@gmail.com> | null | null | MIT License
Copyright (c) 2026 SleepyPandas
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| pdf, converter, continuous, single-page, html, markdown, docx | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Utili... | [] | null | null | >=3.10 | [] | [] | [] | [
"playwright>=1.40.0",
"markdown>=3.10.1",
"Pygments>=2.17.0",
"pymdown-extensions>=10.0",
"mammoth>=1.6.0",
"pypdf>=3.17.0",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov>=4.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/SleepyPandas/Document-to-ContinuousPDF"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:39:19.138440 | seamless_pdf-1.0.3.tar.gz | 21,107 | 92/be/8a74e71082d94281c4040324cad15a710b93431b91b487caef325d2bb198/seamless_pdf-1.0.3.tar.gz | source | sdist | null | false | 8e7945a9fa017fe205d4446acb01b136 | f4fe8b903d231f6aaf53f53208bc4b32ae0531febb373cce50748912c2d5887e | 92be8a74e71082d94281c4040324cad15a710b93431b91b487caef325d2bb198 | null | [
"LICENSE"
] | 213 |
2.2 | cjm-fasthtml-web-audio | 0.0.1 | A reusable Web Audio API manager for FastHTML applications with multi-buffer support, parallel decode, and card-stack-compatible playback. | # cjm-fasthtml-web-audio
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
## Install
``` bash
pip install cjm_fasthtml_web_audio
```
## Project Structure
nbs/
├── components.ipynb # FastHTML component helpers for the Web Audio API manager
├── js.ipynb # JavaScript generation for the Web Audio API manager
└── models.ipynb # Configuration and HTML ID types for the Web Audio API manager
Total: 3 notebooks
## Module Dependencies
``` mermaid
graph LR
components[components<br/>components]
js[js<br/>js]
models[models<br/>models]
components --> models
components --> js
js --> models
```
*3 cross-module dependencies detected*
## CLI Reference
No CLI commands found in this project.
## Module Overview
Detailed documentation for each module in the project:
### components (`components.ipynb`)
> FastHTML component helpers for the Web Audio API manager
#### Import
``` python
from cjm_fasthtml_web_audio.components import (
render_audio_urls_input,
render_web_audio_script
)
```
#### Functions
``` python
def render_audio_urls_input(
config: WebAudioConfig, # Instance configuration
audio_urls: List[str], # Audio file URLs to load
oob: bool = False, # Whether to render as OOB swap
) -> Any: # Hidden input element with JSON-encoded URLs
"Render a hidden input storing audio URLs as JSON."
```
``` python
def render_web_audio_script(
config: WebAudioConfig, # Instance configuration
focus_input_id: str, # Hidden input ID for focused index
card_stack_id: str, # Card stack container ID
nav_down_btn_id: str = "", # Nav down button ID (for auto-navigate)
) -> Any: # Script element with complete Web Audio JS
"Render the complete Web Audio API script for a configured instance."
```
### js (`js.ipynb`)
> JavaScript generation for the Web Audio API manager
#### Import
``` python
from cjm_fasthtml_web_audio.js import (
generate_state_init,
generate_init_audio,
generate_stop_audio,
generate_play_segment,
generate_optional_features,
generate_focus_change,
generate_htmx_settle_handler,
generate_web_audio_js
)
```
#### Functions
``` python
def generate_state_init(
config: WebAudioConfig, # Instance configuration
) -> str: # JS code that initializes the state object
"Generate JS code to initialize the namespaced state object."
```
``` python
def generate_init_audio(
config: WebAudioConfig, # Instance configuration
) -> str: # JS init function
"Generate JS function that loads and decodes audio files in parallel."
```
``` python
def generate_stop_audio(
config: WebAudioConfig, # Instance configuration
) -> str: # JS stop function
"Generate JS function that stops current playback."
```
``` python
def generate_play_segment(
config: WebAudioConfig, # Instance configuration
nav_down_btn_id: str = "", # Nav down button ID (for auto-navigate)
) -> str: # JS play function
"Generate JS function that plays a segment from a specific buffer."
```
``` python
def generate_optional_features(
config: WebAudioConfig, # Instance configuration
) -> str: # JS for optional feature functions (empty if none enabled)
"Generate JS for optional features based on config flags."
```
``` python
def generate_focus_change(
config: WebAudioConfig, # Instance configuration
focus_input_id: str, # Hidden input ID for focused index
) -> str: # JS focus change callback
"Generate JS focus change callback for card stack integration."
```
``` python
def generate_htmx_settle_handler(
config: WebAudioConfig, # Instance configuration
card_stack_id: str, # Card stack container ID
) -> str: # JS HTMX afterSettle handler
"Generate HTMX afterSettle handler for card stack navigation."
```
``` python
def generate_web_audio_js(
config: WebAudioConfig, # Instance configuration
focus_input_id: str, # Hidden input ID for focused index
card_stack_id: str, # Card stack container ID
nav_down_btn_id: str = "", # Nav down button ID (for auto-navigate)
) -> str: # Complete JS code for this instance
"Generate the complete Web Audio API JS for a configured instance."
```
### models (`models.ipynb`)
> Configuration and HTML ID types for the Web Audio API manager
#### Import
``` python
from cjm_fasthtml_web_audio.models import (
WebAudioConfig,
WebAudioHtmlIds
)
```
#### Classes
``` python
@dataclass
class WebAudioConfig:
"Configuration for a Web Audio API manager instance."
namespace: str # Unique prefix (e.g., "align", "review")
indicator_selector: str # CSS selector for playing indicators
data_index_attr: str = 'audioFileIndex' # Data attr name for buffer index
data_start_attr: str = 'startTime' # Data attr name for start time
data_end_attr: str = 'endTime' # Data attr name for end time
enable_speed: bool = False # Playback speed support
enable_replay: bool = False # Replay current segment support
enable_auto_nav: bool = False # Auto-navigate on completion support
def ns(self) -> str: # Capitalized namespace for JS function names
"""Capitalized namespace for JS function names (e.g., 'align' -> 'Align')."""
return self.namespace.capitalize()
@property
def state_key(self) -> str: # JS state object key
"Capitalized namespace for JS function names (e.g., 'align' -> 'Align')."
def state_key(self) -> str: # JS state object key
"JS global state object key (e.g., '_webAudio_align')."
```
``` python
class WebAudioHtmlIds:
"HTML ID generators for Web Audio manager elements."
def audio_urls_input(
namespace: str # Instance namespace
) -> str: # HTML ID for audio URLs hidden input
"ID for the hidden input storing audio file URLs as JSON."
```
| text/markdown | Christian J. Mills | 9126128+cj-mills@users.noreply.github.com | null | null | Apache-2.0 | nbdev jupyter notebook python | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Natural Language :: English",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13"
] | [] | https://github.com/cj-mills/cjm-fasthtml-web-audio | null | >=3.12 | [] | [] | [] | [
"python-fasthtml",
"cjm-fasthtml-tailwind",
"cjm-fasthtml-daisyui",
"cjm-fasthtml-card-stack",
"cjm-fasthtml-keyboard-navigation",
"cjm-fasthtml-app-core"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:39:16.751437 | cjm_fasthtml_web_audio-0.0.1.tar.gz | 14,132 | f5/de/c7f5738074086210447298208e752ff5e41840f68590103b7489290be30c/cjm_fasthtml_web_audio-0.0.1.tar.gz | source | sdist | null | false | 08e9eba336e2c5fe79953822dc95ef22 | 607cf2e73c2bc4bde14056ac5e3100f8b5a44b893b985744fd32aeb68dfd2803 | f5dec7f5738074086210447298208e752ff5e41840f68590103b7489290be30c | null | [] | 224 |
2.4 | sanicode | 0.1.0 | AI-assisted code sanitization scanner with OWASP ASVS, NIST 800-53, and ASD STIG compliance mapping. | # Sanicode
Sanicode scans Python codebases for input validation and sanitization gaps, builds a knowledge graph of data flow (entry points, sanitizers, sinks), and maps every finding to OWASP ASVS 5.0, NIST 800-53, and ASD STIG v4r11 controls. Output formats include SARIF (for GitHub Code Scanning integration), JSON, and Markdown.
Unlike pattern-only tools like Bandit or Semgrep, sanicode constructs a data flow graph so findings carry context about *how* tainted data reaches a sink and *whether* sanitization exists along the path.
## Install
```
pip install sanicode
```
Requires Python 3.10+.
## Quick start
Scan a codebase and generate a Markdown report:
```
sanicode scan .
```
Generate SARIF output for CI integration:
```
sanicode scan . -f sarif
```
Reports are written to `sanicode-reports/` by default.
## API server
Start the FastAPI server for remote or hybrid scan mode:
```
sanicode serve
```
This starts on port 8080 with Prometheus metrics at `/metrics`.
### Endpoints
```
POST /api/v1/scan Submit a scan (async)
GET /api/v1/scan/{id} Poll scan status
GET /api/v1/scan/{id}/findings Retrieve findings (JSON or ?format=sarif)
GET /api/v1/scan/{id}/graph Retrieve knowledge graph
POST /api/v1/analyze Instant snippet analysis
GET /api/v1/compliance/map Compliance framework lookup
GET /api/v1/health Liveness check
GET /metrics Prometheus metrics
```
## CLI commands
```
sanicode scan . # Scan codebase, generate reports
sanicode scan . -f sarif # SARIF output
sanicode scan . -f json -f sarif # Multiple formats
sanicode serve # Start API server on :8080
sanicode report scan-result.json # Re-generate reports from saved results
sanicode report scan-result.json -s high # Filter by severity
sanicode report scan-result.json --cwe 89 # Filter by CWE
sanicode config --show # Show resolved configuration
sanicode config --init # Create starter sanicode.toml
sanicode graph . --export graph.json # Export knowledge graph
```
## Detection rules
| Rule | Description | CWE |
|--------|----------------------------------|---------|
| SC001 | `eval()` | CWE-78 |
| SC002 | `exec()` | CWE-78 |
| SC003 | `os.system()` | CWE-78 |
| SC004 | `subprocess` with `shell=True` | CWE-78 |
| SC005 | `pickle.loads()` | CWE-502 |
| SC006 | SQL string formatting | CWE-89 |
| SC007 | `__import__()` | CWE-94 |
| SC008 | `yaml.load()` without `Loader` | CWE-502 |
Each finding is enriched with CWE metadata and mapped to the active compliance profiles.
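For instance, rule SC006 targets SQL built by string formatting. A minimal illustration of the flagged pattern and its parameterized fix, using the standard-library `sqlite3` module (the table and query here are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice"

# The pattern SC006 flags: user input formatted directly into SQL text (CWE-89).
# A value like "' OR '1'='1" would change the query's meaning.
unsafe_query = f"SELECT name FROM users WHERE name = '{user_input}'"

# The sanitized alternative: a parameterized query, where the driver binds the
# value separately and tainted data never becomes part of the SQL text.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [('alice',)]
```

In data-flow terms, the parameter binding acts as the sanitizer on the path from entry point to sink.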
## Compliance frameworks
Sanicode maps findings to three frameworks out of the box:
- **OWASP ASVS 5.0** -- V1: Encoding and Sanitization requirements (L1/L2/L3)
- **NIST 800-53** -- SI-10 (Information Input Validation), SI-15 (Information Output Filtering), and related controls
- **ASD STIG v4r11** -- APSC-DV-002510 (CAT I), APSC-DV-002520 (CAT II), APSC-DV-002530 (CAT II), and related checks
## Configuration
Create a config file:
```
sanicode config --init
```
This writes a `sanicode.toml` in the current directory. Config is loaded from (in order):
1. `--config` flag
2. `sanicode.toml` in the current directory
3. `~/.config/sanicode/config.toml`
Sanicode works fully without any configuration. LLM tiers are optional -- without them, the tool runs in degraded mode using AST pattern matching, knowledge graph construction, and compliance lookups. LLM integration adds context-aware reasoning on top of these.
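The AST pattern matching that degraded mode relies on can be sketched with the standard-library `ast` module. This is a toy illustration of the technique, not sanicode's actual implementation; it reports calls to `eval()`, roughly what a rule like SC001 checks for:

```python
import ast

def find_eval_calls(source: str) -> list[int]:
    """Return the line numbers of direct eval() calls in the given source."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        # Match a call whose callee is the bare name "eval".
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"
        ):
            hits.append(node.lineno)
    return hits

code = "x = eval(user_input)\ny = len(user_input)\n"
print(find_eval_calls(code))  # [1]
```

A real rule engine layers taint tracking and sanitizer detection on top of matches like this; the optional LLM tiers then add context-aware reasoning about the surrounding data flow.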
### LLM tiers (optional)
The config supports three tiers for different task complexities, each pointing at any OpenAI-compatible endpoint (Ollama, vLLM, OpenShift AI):
| Tier | Purpose | Recommended model |
|-------------|-----------------------------------|-------------------------|
| `fast` | Classification, severity scoring | Granite Nano, Mistral 7B |
| `analysis` | Data flow reasoning | Granite Code 8B |
| `reasoning` | Compliance mapping, reports | Llama 3.1 70B |
## Current status
Phase 1 MVP: Python-only scanning, 8 detection rules, local and API server modes. LLM integration is planned but not yet wired; the tool operates in degraded mode with AST patterns and compliance mapping.
## License
Apache-2.0
| text/markdown | Sanicode Contributors | null | null | null | Apache-2.0 | compliance, llm, owasp, sast, security, stig | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Security"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"fastapi>=0.100",
"litellm>=1.0",
"networkx>=3.0",
"prometheus-client>=0.17",
"rich>=13.0",
"tomli>=2.0; python_version < \"3.11\"",
"typer>=0.9.0",
"uvicorn[standard]>=0.20",
"build>=1.0; extra == \"dev\"",
"httpx>=0.24; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\"",
"pytest-cov... | [] | [] | [] | [
"Homepage, https://github.com/rdwj/sanicode",
"Repository, https://github.com/rdwj/sanicode",
"Issues, https://github.com/rdwj/sanicode/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:37:47.026801 | sanicode-0.1.0.tar.gz | 86,132 | 47/4b/154fecd5387a075edf2bd741278fdbb66df1f31a88ed562f63504b79e110/sanicode-0.1.0.tar.gz | source | sdist | null | false | 99d1291fe2f3d6d531b9f1b79516b450 | 9819e8950d821f0ca56cf88e672065ff480a3c470389db02609d79fe767a83c9 | 474b154fecd5387a075edf2bd741278fdbb66df1f31a88ed562f63504b79e110 | null | [] | 238 |
2.4 | hvrt | 2.1.1 | Hierarchical Variance-Retaining Transformer (HVRT) — variance-aware sample transformation for tabular data | # HVRT: Hierarchical Variance-Retaining Transformer
[](https://pypi.org/project/hvrt/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
Variance-aware sample transformation for tabular data: reduce, expand, or augment.
---
## Overview
HVRT partitions a dataset into variance-homogeneous regions via a decision tree fitted on a synthetic extremeness target, then applies a configurable per-partition operation (selection for reduction, sampling for expansion). The tree is fitted once; `reduce()`, `expand()`, and `augment()` all draw from the same fitted model.
| Operation | Method | Description |
|---|---|---|
| **Reduce** | `model.reduce(ratio=0.3)` | Select a geometrically diverse representative subset |
| **Expand** | `model.expand(n=50000)` | Generate synthetic samples via per-partition KDE or other strategy |
| **Augment** | `model.augment(n=15000)` | Concatenate original data with synthetic samples |
---
## Algorithm
### 1. Z-score normalisation
```
X_z = (X - μ) / σ per feature
```
Categorical features are integer-encoded then z-scored.
### 2. Synthetic target construction
**HVRT** — sum of normalised pairwise feature interactions:
```
For all feature pairs (i, j):
interaction = X_z[:,i] ⊙ X_z[:,j]
normalised = (interaction - mean) / std
target = sum of all normalised interaction columns O(n · d²)
```
**FastHVRT** — sum of z-scores per sample:
```
target_i = Σ_j X_z[i, j] O(n · d)
```
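The two target constructions can be sketched in NumPy (an illustrative re-derivation of the formulas above, not the package's internal code):

```python
import numpy as np

def hvrt_target(X_z: np.ndarray) -> np.ndarray:
    """Sum of normalised pairwise feature interactions -- O(n * d^2)."""
    n, d = X_z.shape
    target = np.zeros(n)
    for i in range(d):
        for j in range(i + 1, d):
            interaction = X_z[:, i] * X_z[:, j]
            std = interaction.std()
            if std > 0:  # skip degenerate (constant) interaction columns
                target += (interaction - interaction.mean()) / std
    return target

def fast_hvrt_target(X_z: np.ndarray) -> np.ndarray:
    """Sum of z-scores per sample -- O(n * d)."""
    return X_z.sum(axis=1)
```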
### 3. Partitioning
A `DecisionTreeRegressor` is fitted on the synthetic target. Leaves form variance-homogeneous partitions. Tree depth and leaf size are auto-tuned to dataset size.
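Fitting the partition tree is then a standard scikit-learn call, with leaf indices serving as partition labels (a sketch under the same assumptions; parameter defaults here are illustrative, not the auto-tuned values):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_partitions(X_z, target, max_leaf_nodes=32, min_samples_leaf=20, random_state=42):
    """Fit a regression tree on the synthetic target; each leaf is one partition."""
    tree = DecisionTreeRegressor(
        max_leaf_nodes=max_leaf_nodes,
        min_samples_leaf=min_samples_leaf,
        random_state=random_state,
    )
    tree.fit(X_z, target)
    # tree.apply returns the leaf index for every sample -> partition ids
    return tree, tree.apply(X_z)
```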
### 4. Per-partition operations
**Reduce:** Select representatives within each partition using the chosen [selection strategy](#selection-strategies). Budget is proportional to partition size (`variance_weighted=False`) or biased toward high-variance partitions (`variance_weighted=True`).
**Expand:** Draw synthetic samples within each partition using the chosen [generation strategy](#generation-strategies). Budget allocation follows the same logic.
---
## Installation
```bash
pip install hvrt
```
```bash
git clone https://github.com/hotprotato/hvrt.git
cd hvrt
pip install -e .
```
---
## Quick Start
```python
from hvrt import HVRT, FastHVRT
# Fit once — reduce and expand from the same model
model = HVRT(random_state=42).fit(X_train, y_train) # y optional
X_reduced, idx = model.reduce(ratio=0.3, return_indices=True)
X_synthetic = model.expand(n=50000)
X_augmented = model.augment(n=15000)
# FastHVRT — O(n·d) target; preferred for expansion
model = FastHVRT(random_state=42).fit(X_train)
X_synthetic = model.expand(n=50000)
```
---
## API Reference
### `HVRT`
```python
from hvrt import HVRT
model = HVRT(
n_partitions=None, # Max tree leaves; auto-tuned if None
min_samples_leaf=None, # Min samples per leaf; auto-tuned if None
y_weight=0.0, # 0.0 = unsupervised; 1.0 = y drives splits
bandwidth=0.5, # Default KDE bandwidth for expand()
auto_tune=True,
random_state=42,
# Pipeline params (see Pipeline section)
reduce_params=None,
expand_params=None,
augment_params=None,
)
```
Target: sum of normalised pairwise feature interactions. O(n · d²). Preferred for reduction.
### `FastHVRT`
```python
from hvrt import FastHVRT
model = FastHVRT(bandwidth=0.5, random_state=42)
```
Target: sum of z-scores. O(n · d). Equivalent quality to HVRT for expansion. All constructor parameters identical to HVRT.
### `fit`
```python
model.fit(X, y=None, feature_types=None)
# feature_types: list of 'continuous' or 'categorical' per column
```
### `reduce`
```python
X_reduced = model.reduce(
n=None, # Absolute target count
ratio=None, # Proportional (e.g. 0.3 = keep 30%)
method='fps', # Selection strategy; see Selection Strategies
variance_weighted=True, # Oversample high-variance partitions
return_indices=False,
n_partitions=None, # Override tree granularity for this call only
)
```
### `expand`
```python
X_synth = model.expand(
n=10000,
variance_weighted=False, # True = oversample tails
bandwidth=None, # Override instance bandwidth
adaptive_bandwidth=False, # Scale bandwidth with local expansion ratio
generation_strategy=None, # See Generation Strategies
return_novelty_stats=False,
n_partitions=None,
)
```
`adaptive_bandwidth=True` uses per-partition bandwidth `bw_p = scott_p × max(1, budget_p/n_p)^(1/d)`.
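Numerically, taking Scott's factor as `n ** (-1 / (d + 4))` (an assumption about `scott_p`; the package may compute it differently), the scaling works out as:

```python
def adaptive_bandwidth(n_p: int, budget_p: int, d: int) -> float:
    """bw_p = scott_p * max(1, budget_p / n_p) ** (1 / d).

    Partitions asked to generate more samples than they contain
    (budget_p > n_p) get a proportionally wider bandwidth."""
    scott_p = n_p ** (-1.0 / (d + 4))
    return scott_p * max(1.0, budget_p / n_p) ** (1.0 / d)
```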
### `augment`
```python
X_aug = model.augment(n=15000, variance_weighted=False)
# n must exceed len(X); returns original X concatenated with (n - len(X)) synthetic samples
```
### Utility methods
```python
partitions = model.get_partitions()
# [{'id': 5, 'size': 120, 'mean_abs_z': 0.84, 'variance': 1.2}, ...]
novelty = model.compute_novelty(X_new) # min z-space distance per point
params = HVRT.recommend_params(X) # {'n_partitions': 180, ...}
```
---
## sklearn Pipeline
Operation parameters are declared at construction time via `ReduceParams`, `ExpandParams`, or `AugmentParams`. The tree is fitted once during `fit()`; `transform()` calls the corresponding operation.
```python
from hvrt import HVRT, FastHVRT, ReduceParams, ExpandParams, AugmentParams
from sklearn.pipeline import Pipeline
# Reduce
pipe = Pipeline([('hvrt', HVRT(reduce_params=ReduceParams(ratio=0.3)))])
X_red = pipe.fit_transform(X, y)
# Expand
pipe = Pipeline([('hvrt', FastHVRT(expand_params=ExpandParams(n=50000)))])
X_synth = pipe.fit_transform(X)
# Augment
pipe = Pipeline([('hvrt', HVRT(augment_params=AugmentParams(n=15000)))])
X_aug = pipe.fit_transform(X)
```
Alternatively, import from `hvrt.pipeline` to make the intent explicit:
```python
from hvrt.pipeline import HVRT, ReduceParams
```
### ReduceParams
```python
ReduceParams(
n=None,
ratio=None, # e.g. 0.3
method='fps',
variance_weighted=True,
return_indices=False,
n_partitions=None,
)
```
### ExpandParams
```python
ExpandParams(
n=50000, # required
variance_weighted=False,
bandwidth=None,
adaptive_bandwidth=False,
generation_strategy=None,
return_novelty_stats=False,
n_partitions=None,
)
```
### AugmentParams
```python
AugmentParams(
n=15000, # required; must exceed len(X)
variance_weighted=False,
n_partitions=None,
)
```
---
## Generation Strategies
```python
from hvrt import FastHVRT, univariate_kde_copula
model = FastHVRT(random_state=42).fit(X)
# By name
X_synth = model.expand(n=10000, generation_strategy='bootstrap_noise')
# By reference
X_synth = model.expand(n=10000, generation_strategy=univariate_kde_copula)
# Custom callable
def my_strategy(X_z, partition_ids, unique_partitions, budgets, random_state):
...
return X_synthetic # shape (sum(budgets), n_features), z-score space
X_synth = model.expand(n=10000, generation_strategy=my_strategy)
```
| Strategy | Behaviour | Notes |
|---|---|---|
| `'multivariate_kde'` | `scipy.stats.gaussian_kde` on all features jointly. Scott's rule. **Default.** | Captures full joint covariance |
| `'univariate_kde_copula'` | Per-feature 1-D KDE marginals + Gaussian copula. | More flexible per-feature marginals |
| `'bootstrap_noise'` | Resample with replacement + Gaussian noise at 10% of per-feature std. | Fastest; no distributional assumptions |
```python
from hvrt import BUILTIN_GENERATION_STRATEGIES
list(BUILTIN_GENERATION_STRATEGIES)
# ['multivariate_kde', 'univariate_kde_copula', 'bootstrap_noise']
```
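To make the custom-callable contract concrete, a minimal `bootstrap_noise`-style strategy might look like this (a sketch; the built-in implementation may differ in detail):

```python
import numpy as np

def bootstrap_noise_sketch(X_z, partition_ids, unique_partitions, budgets, random_state):
    """Per partition: resample rows with replacement, then add Gaussian noise
    at 10% of each feature's within-partition std."""
    rng = np.random.default_rng(random_state)
    out = []
    for pid, budget in zip(unique_partitions, budgets):
        rows = X_z[partition_ids == pid]
        picks = rows[rng.integers(0, len(rows), size=budget)]
        noise = rng.standard_normal(picks.shape) * 0.1 * rows.std(axis=0)
        out.append(picks + noise)
    return np.vstack(out)  # shape (sum(budgets), n_features), z-score space
```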
---
## Selection Strategies
```python
from hvrt import HVRT
model = HVRT(random_state=42).fit(X, y)
X_red = model.reduce(ratio=0.2, method='fps') # default
X_red = model.reduce(ratio=0.2, method='medoid_fps')
X_red = model.reduce(ratio=0.2, method='variance_ordered')
X_red = model.reduce(ratio=0.2, method='stratified')
# Custom callable
def my_selector(X_z, partition_ids, unique_partitions, budgets, random_state):
...
return selected_indices # global indices into X
X_red = model.reduce(ratio=0.2, method=my_selector)
```
| Strategy | Behaviour |
|---|---|
| `'fps'` / `'centroid_fps'` | Greedy Furthest Point Sampling seeded at partition centroid. **Default.** |
| `'medoid_fps'` | FPS seeded at the partition medoid. |
| `'variance_ordered'` | Select samples with highest local k-NN variance (k=10). |
| `'stratified'` | Random sample within each partition. |
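For intuition, greedy FPS over a single partition, seeded at the point nearest the centroid, can be sketched as (illustrative only):

```python
import numpy as np

def fps_select(points: np.ndarray, k: int) -> np.ndarray:
    """Greedy furthest-point sampling seeded at the point nearest the centroid."""
    centroid = points.mean(axis=0)
    seed = int(np.argmin(np.linalg.norm(points - centroid, axis=1)))
    selected = [seed]
    # distance from every point to its nearest already-selected point
    dists = np.linalg.norm(points - points[seed], axis=1)
    while len(selected) < min(k, len(points)):
        nxt = int(np.argmax(dists))  # furthest from the current selection
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(selected)
```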
---
## Benchmarks
### Sample reduction
Metric: GBM ROC-AUC on reduced training set as % of full-training-set AUC.
n=3 000 train / 2 000 test, seed=42.
| Scenario | Retention | HVRT-fps | HVRT-yw | Random | Stratified |
|---|---|---|---|---|---|
| Well-behaved (Gaussian, no noise) | 10% | 97.1% | 98.1% | 96.9% | 98.0% |
| Well-behaved (Gaussian, no noise) | 20% | 98.7% | 98.9% | 98.3% | 99.0% |
| Noisy labels (20% random flip) | 10% | **96.1%** | 91.1% | 93.3% | 90.4% |
| Noisy labels (20% random flip) | 20% | **95.2%** | 95.9% | 93.1% | 93.1% |
| Heavy-tail + label noise + junk features | 30% | **98.2%** | 98.2% | 94.3% | 95.2% |
| Rare events (5% positive class) | 10% | 98.0% | **99.4%** | 86.5% | 94.1% |
| Rare events (5% positive class) | 20% | 98.0% | **100.4%** | 97.9% | 99.0% |
*HVRT-fps: `method='fps'`, `variance_weighted=True`. HVRT-yw: same + `y_weight=0.3`.*
Reproduce: `python benchmarks/reduction_denoising_benchmark.py`
### Synthetic data expansion
Metric: discriminator accuracy (target 50% = indistinguishable), marginal KS fidelity, tail MSE.
bandwidth=0.5, synthetic-to-real ratio 1×.
| Method | Marginal Fidelity | Discriminator | Tail Error | Fit time |
|---|---|---|---|---|
| **HVRT** | 0.974 | **49.6%** | **0.004** | 0.07 s |
| Gaussian Copula | 0.998 | 49.4% | 0.017 | 0.02 s |
| GMM (k=10) | 0.989 | 49.2% | 0.093 | 1.06 s |
| Bootstrap + Noise | 0.994 | 49.7% | 0.131 | 0.00 s |
| SMOTE | 1.000 | 48.6% | 0.000 | 0.00 s |
| CTGAN† | 0.920 | 55.8% | 0.500 | 45 s |
| TVAE† | 0.940 | 53.5% | 0.450 | 40 s |
| TabDDPM† | 0.960 | 52.0% | 0.300 | 120 s |
| MOSTLY AI† | 0.975 | 51.0% | 0.150 | 60 s |
*† Published numbers. Discriminator = 50% is ideal. Tail error = 0 is ideal.*
Reproduce: `python benchmarks/run_benchmarks.py --tasks expand`
---
## Benchmarking Scripts
```bash
python benchmarks/run_benchmarks.py
python benchmarks/run_benchmarks.py --tasks reduce --datasets adult housing
python benchmarks/run_benchmarks.py --tasks expand
python benchmarks/reduction_denoising_benchmark.py
python benchmarks/adaptive_kde_benchmark.py
python benchmarks/adaptive_full_benchmark.py
python benchmarks/heart_disease_benchmark.py # requires: pip install ctgan
python benchmarks/bootstrap_failure_benchmark.py
```
---
## Backward Compatibility
The v1 API is still importable:
```python
from hvrt import HVRTSampleReducer, AdaptiveHVRTReducer
reducer = HVRTSampleReducer(reduction_ratio=0.2, random_state=42)
X_reduced, y_reduced = reducer.fit_transform(X, y)
```
The `mode` constructor parameter is deprecated. Replace with params objects:
```python
# Deprecated
HVRT(mode='reduce')
# Replacement
HVRT(reduce_params=ReduceParams(ratio=0.3))
```
---
## Testing
```bash
pytest
pytest --cov=hvrt --cov-report=term-missing
```
---
## Citation
```bibtex
@software{hvrt2026,
author = {Peace, Jake},
title = {HVRT: Hierarchical Variance-Retaining Transformer},
year = {2026},
url = {https://github.com/hotprotato/hvrt}
}
```
---
## License
MIT License — see [LICENSE](LICENSE).
## Acknowledgments
Development assisted by Claude (Anthropic).
| text/markdown | null | Jake Peace <mail@jakepeace.me> | null | null | null | machine-learning, sample-reduction, synthetic-data, data-augmentation, data-preprocessing, variance, kde, tabular-data, heavy-tailed | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyth... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.20.0",
"scikit-learn>=1.0.0",
"scipy>=1.7.0",
"xgboost>=1.5; extra == \"benchmarks\"",
"matplotlib>=3.5; extra == \"benchmarks\"",
"pandas>=1.3; extra == \"benchmarks\"",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=3.0.0; extra == \"dev\"",
"black>=22.0.0; extra == \"dev\"",
"mypy>=0... | [] | [] | [] | [
"Homepage, https://github.com/hotprotato/hvrt",
"Documentation, https://github.com/hotprotato/hvrt#readme",
"Repository, https://github.com/hotprotato/hvrt",
"Issues, https://github.com/hotprotato/hvrt/issues"
] | twine/6.2.0 CPython/3.13.5 | 2026-02-21T05:36:47.319255 | hvrt-2.1.1.tar.gz | 56,652 | 0c/a4/fd848f1b56605623df7235c780c7a4ef8aabc231619051cc488540009ac4/hvrt-2.1.1.tar.gz | source | sdist | null | false | 300c9f6beca3bb41da95a481c60a18f7 | 38228e869ad616756183f9ccd89bdca5aef70ca3d574c0dbfee0375219bd7c8e | 0ca4fd848f1b56605623df7235c780c7a4ef8aabc231619051cc488540009ac4 | MIT | [
"LICENSE"
] | 244 |
2.3 | uplang | 2.0.1 | Command-line tool for updating MC Java modpack language files | # UpLang
**UpLang** is a command-line tool built for Minecraft Java Edition modpack developers and translators. It helps you efficiently extract, manage, and update mods' localization language files (primarily `en_us` and `zh_cn`).
If, while assembling a modpack, frequent mod updates keep shifting language keys out from under your translations, or you simply want an easier way to maintain a consolidated translation resource pack, UpLang is the tool for the job.
## 🌟 Key Features
- 📦 **Language file extraction (`init`)**: Quickly extract `en_us.json` and `zh_cn.json` from a set of mod `.jar` files into a target resource pack directory.
- 🔄 **Smart diff sync (`update`)**: After mods are upgraded, automatically diff the old and new `en_us` files (added, changed, and removed keys) and sync those changes into your resource pack, keeping translations aligned with the latest mods and preventing them from silently breaking.
- 📥 **Easy translation import (`import`)**: Batch-import existing `zh_cn` translations from another local resource pack directory or a `.zip` archive, overwriting untranslated or outdated entries.
- 🛠️ **Robust fault-tolerant JSON parsing**: A highly forgiving built-in JSON parser handles non-standard Minecraft language files containing comments (`//`, `#`), trailing commas, mismatched braces, and various non-UTF-8 encodings.
- ⚡ **Fast multithreaded processing**: A configurable concurrency option (`--workers`) scans and parses hundreds of mods quickly.
## 📥 Installation
Requires **Python >= 3.12**.
Install with `pip` or `uv`:
```bash
# With pip (published on PyPI)
pip install uplang
# Recommended: install as a uv tool
uv tool install uplang
# or run on demand without installing
uvx uplang [COMMAND]
```
## 🚀 Usage Guide
UpLang revolves around maintaining the `assets` directory of a **resource pack** that consolidates the language files of all your mods.
### 1. Initial extraction (`init`)
On first use, extract all mod language files from the modpack into your resource pack's `assets` directory in one step:
```bash
uplang init <MODS_DIR> <RESOURCE_PACK_ASSETS_DIR> [--workers N]
```
**Example:**
```bash
uplang init .minecraft/mods .minecraft/resourcepacks/MyTranslationPack/assets --workers 8
```
This step generates the source files under `assets/` using the `{mod_id}/lang/{locale}.json` layout.
### 2. Sync and maintain (`update`)
After you update some of the pack's mods, existing translations may break because keys were renamed or new items were added. In that case, run the `update` command:
```bash
uplang update <MODS_DIR> <RESOURCE_PACK_ASSETS_DIR> [--workers N]
```
UpLang parses the latest `en_us` files inside the `.jar`s, compares them against the old `en_us` files in the resource pack, and then automatically:
- **Removes** language keys the mod author has dropped.
- **Adds** language keys for newly introduced items and content.
- **Updates** entries whose English source text has materially changed.
These differences are then safely synced into the corresponding `zh_cn.json` files so translators can follow up.
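The key-level diff that drives `update` amounts to a plain dictionary comparison (an illustrative sketch, not UpLang's actual implementation):

```python
def diff_lang_keys(old_en: dict, new_en: dict):
    """Classify language keys into removed / added / changed, mirroring the update step."""
    removed = [k for k in old_en if k not in new_en]
    added = [k for k in new_en if k not in old_en]
    changed = [k for k in new_en if k in old_en and new_en[k] != old_en[k]]
    return removed, added, changed
```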
### 3. Import existing translations (`import`)
If you find a community translation resource pack or `.zip`, merge its `zh_cn` mappings directly into your current workspace:
```bash
uplang import <CURRENT_ASSETS_DIR> <SOURCE_PACK_OR_ZIP>
```
**Example (import from a resource pack directory):**
```bash
uplang import ./pack/assets ./other_translation_pack/assets
```
**Example (import from a ZIP translation pack):**
```bash
uplang import ./pack/assets community_translations.zip
```
UpLang automatically matches entries by `mod_id` and intelligently replaces English entries in your pack that have not yet been translated.
## ⚙️ Advanced
- **UTF-8 CJK safety**: When reading and writing language files, UpLang strictly preserves CJK (Chinese/Japanese/Korean) characters, Unicode surrogate pairs, and private-use-area characters through safe transport and escaping, ensuring the generated content can always be read by the MC engine.
## 📄 License
This project is released under an open-source license; see the `LICENSE` file for details.
| text/markdown | QianFuv | QianFuv <qianfuv@qq.com> | null | null | Copyright <2025> <QianFuv>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.3.1",
"orjson>=3.11.7",
"mypy>=1.18.2; extra == \"dev\"",
"ruff>=0.14.6; extra == \"dev\""
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:34:54.402958 | uplang-2.0.1.tar.gz | 13,359 | d5/4c/5c69fb744c971ca613644b5724fbd8693466b9e7f9d0dd8df8bf50a5302d/uplang-2.0.1.tar.gz | source | sdist | null | false | e5684e3819043e4a814238a4c6d75695 | 12c565f63d17681fda5f909958b2807402407f5f9b40bdb22c83d9e380424659 | d54c5c69fb744c971ca613644b5724fbd8693466b9e7f9d0dd8df8bf50a5302d | null | [] | 206 |
2.4 | licenses-deny | 0.1.5 | A Python package that audits package licenses and provenance against user-defined allow/deny policies. | # licenses-deny
Simple CLI to inspect Python environment dependencies for license compliance, banned packages, and allowed sources.
## Requirements
- Python 3.11+
- Virtual environment activated before running checks (required by the tool)
## Installation
```bash
pip install licenses-deny
```
## Usage
```bash
# Initialize template configuration near project root
licenses-deny init
# List installed packages with detected license/source
licenses-deny list
# List and include raw license strings when they differ from the normalized value
licenses-deny list --show-raw-license
# Run checks (licenses + bans + sources)
licenses-deny check
# Run only license checks in strict mode
licenses-deny check licenses --strict
```
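The kind of metadata lookup behind the `list` command can be approximated with the standard library's `importlib.metadata` (a sketch of the general approach, not this tool's code):

```python
from importlib.metadata import distributions

def installed_licenses() -> dict[str, str]:
    """Map each installed distribution to its declared license,
    falling back to trove classifiers when the License field is empty."""
    result: dict[str, str] = {}
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        license_field = dist.metadata.get("License") or ""
        if not license_field:
            # e.g. "License :: OSI Approved :: MIT License" -> "MIT License"
            for clf in dist.metadata.get_all("Classifier") or []:
                if clf.startswith("License ::"):
                    license_field = clf.split("::")[-1].strip()
                    break
        result[name] = license_field or "UNKNOWN"
    return result
```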
## Development
```bash
# Install in editable mode
pip install -e .
# Run CLI directly from source
python -m licenses_deny --help
```
| text/markdown | null | null | null | null | null | license, compliance, dependencies | [
"Natural Language :: English",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14",
"Operating System... | [] | null | null | >=3.11 | [] | [] | [] | [
"packaging>=23.2"
] | [] | [] | [] | [
"Issue Tracker, https://github.com/John2416/licenses-deny/issues",
"Source, https://github.com/John2416/licenses-deny"
] | uv/0.8.3 | 2026-02-21T05:34:24.071750 | licenses_deny-0.1.5.tar.gz | 16,355 | cb/4c/0852b41698425616015254c89cf0bcca6234f3b899f7817d68a4aa32e36f/licenses_deny-0.1.5.tar.gz | source | sdist | null | false | c8f00bd3761116a044f2f5a4c673f32d | b4f1c54803baf2dd33c6ed92fc289cd99daf34e96c14257fe21a632b72dc047a | cb4c0852b41698425616015254c89cf0bcca6234f3b899f7817d68a4aa32e36f | Apache-2.0 | [
"LICENSE"
] | 214 |
2.4 | mcp-vector-search | 2.6.0 | CLI-first semantic code search with MCP integration and interactive D3.js visualization for exploring code relationships | # MCP Vector Search
🔍 **CLI-first semantic code search with MCP integration**
[](https://badge.fury.io/py/mcp-vector-search)
[](https://www.python.org/downloads/)
[](LICENSE)
> ⚠️ **Production Release (v2.6.0)**: Stable and actively maintained. LanceDB is now the default backend for better performance and stability.
A modern, fast, and intelligent code search tool that understands your codebase through semantic analysis and AST parsing. Built with Python, powered by LanceDB, and designed for developer productivity.
## ✨ Features
### 🚀 **Core Capabilities**
- **Semantic Search**: Find code by meaning, not just keywords
- **AST-Aware Parsing**: Understands code structure (functions, classes, methods)
- **Multi-Language Support**: 13 languages - Python, JavaScript, TypeScript, C#, Dart/Flutter, PHP, Ruby, Java, Go, Rust, HTML, and Markdown/Text (with extensible architecture)
- **Knowledge Graph**: Temporal knowledge graph with KuzuDB for entity extraction and relationship mapping (`kg build`, `kg status`, `kg query`)
- **Interactive Visualization**: D3.js-powered visualization with 5+ views (Treemap, Sunburst, Force Graph, Knowledge Graph, Heatmap)
- **Development Narratives**: Generate git history narratives with `story` command (markdown, JSON, HTML output)
- **Real-time Indexing**: File watching with automatic index updates
- **Automatic Version Tracking**: Smart reindexing on tool upgrades
- **Local-First**: Complete privacy with on-device processing
- **Zero Configuration**: Auto-detects project structure and languages
### 🛠️ **Developer Experience**
- **CLI-First Design**: Simple commands for immediate productivity
- **Rich Output**: Syntax highlighting, similarity scores, context
- **Fast Performance**: Sub-second search responses, efficient indexing with pipeline parallelism (37% faster)
- **Modern Architecture**: Async-first, type-safe, modular design
- **Semi-Automatic Reindexing**: Multiple strategies without daemon processes
- **17 MCP Tools**: Comprehensive MCP integration for AI assistants (search, analysis, documentation, KG, story generation)
- **Chat Mode**: LLM-powered code Q&A with iterative refinement (up to 30 queries), deep search, and KG query tools
- **CodeT5+ Embeddings**: Code-specific embeddings via `index-code` command (Salesforce/codet5p-110m-embedding)
### 🔧 **Technical Features**
- **Vector Database**: LanceDB (serverless, file-based)
- **Embedding Models**: Configurable sentence transformers with GPU acceleration
- **Smart Reindexing**: Search-triggered, Git hooks, scheduled tasks, and manual options
- **Extensible Parsers**: Plugin architecture for new languages
- **Configuration Management**: Project-specific settings
- **Production Ready**: Write buffering, auto-indexing, comprehensive error handling
- **Performance**: Apple Silicon M4 Max optimizations (2-4x speedup with MPS)
## 🚀 Quick Start
### Installation
```bash
# Install from PyPI (recommended)
pip install mcp-vector-search
# Or with UV (faster)
uv pip install mcp-vector-search
# Or install from source
git clone https://github.com/bobmatnyc/mcp-vector-search.git
cd mcp-vector-search
uv sync && uv pip install -e .
```
**Verify Installation:**
```bash
# Check that all dependencies are installed correctly
mcp-vector-search doctor
# Should show all ✓ marks
# If you see missing dependencies, try:
pip install --upgrade mcp-vector-search
```
### Zero-Config Setup (Recommended)
The fastest way to get started - **completely hands-off, just one command**:
```bash
# Smart zero-config setup (recommended)
mcp-vector-search setup
```
**What `setup` does automatically:**
- ✅ Detects your project's languages and file types
- ✅ Initializes semantic search with optimal settings
- ✅ Indexes your entire codebase
- ✅ Configures ALL installed MCP platforms (Claude Code, Cursor, etc.)
- ✅ **Uses native Claude CLI integration** (`claude mcp add`) when available
- ✅ **Falls back to `.mcp.json`** if Claude CLI not available
- ✅ Sets up file watching for auto-reindex
- ✅ **Zero user input required!**
**Behind the scenes:**
- **Server name**: `mcp` (for consistency with other MCP projects)
- **Command**: `uv run python -m mcp_vector_search.mcp.server {PROJECT_ROOT}`
- **File watching**: Enabled via `MCP_ENABLE_FILE_WATCHING=true`
- **Integration method**: Native `claude mcp add` (or `.mcp.json` fallback)
**Example output:**
```
🚀 Smart Setup for mcp-vector-search
🔍 Detecting project...
✅ Found 3 language(s): Python, JavaScript, TypeScript
✅ Detected 8 file type(s)
✅ Found 2 platform(s): claude-code, cursor
⚙️ Configuring...
✅ Embedding model: sentence-transformers/all-MiniLM-L6-v2
🚀 Initializing...
✅ Vector database created
✅ Configuration saved
🔍 Indexing codebase...
✅ Indexing completed in 12.3s
🔗 Configuring MCP integrations...
✅ Using Claude CLI for automatic setup
✅ Registered with Claude CLI
✅ Configured 2 platform(s)
🎉 Setup Complete!
```
**Options:**
```bash
# Force re-setup
mcp-vector-search setup --force
# Verbose output for debugging (shows Claude CLI commands)
mcp-vector-search setup --verbose
```
### Advanced Setup Options
For more control over the installation process:
```bash
# Manual setup with MCP integration
mcp-vector-search install --with-mcp
# Custom file extensions
mcp-vector-search install --extensions .py,.js,.ts,.dart
# Skip automatic indexing
mcp-vector-search install --no-auto-index
# Just initialize (no indexing or MCP)
mcp-vector-search init
```
### Add MCP Integration for AI Tools
**Automatic (Recommended):**
```bash
# One command sets up all detected platforms
mcp-vector-search setup
```
**Manual Platform Installation:**
```bash
# Add Claude Code integration (project-scoped)
mcp-vector-search install claude-code
# Add Cursor IDE integration (global)
mcp-vector-search install cursor
# See all available platforms
mcp-vector-search install list
```
**Note**: The `setup` command uses native `claude mcp add` when Claude CLI is available, providing better integration than manual `.mcp.json` creation.
### Remove MCP Integrations
```bash
# Remove specific platform
mcp-vector-search uninstall claude-code
# Remove all integrations
mcp-vector-search uninstall --all
# List configured integrations
mcp-vector-search uninstall list
```
### Basic Usage
```bash
# Search your code
mcp-vector-search search "authentication logic"
mcp-vector-search search "database connection setup"
mcp-vector-search search "error handling patterns"
# Index your codebase (if not done during setup)
mcp-vector-search index
# Index with code-specific embeddings (CodeT5+)
mcp-vector-search index-code
# Check project status
mcp-vector-search status
# Start file watching (auto-update index)
mcp-vector-search watch
# Interactive visualization (5+ views)
mcp-vector-search visualize
# Generate development narrative from git history
mcp-vector-search story
# Knowledge graph operations
mcp-vector-search kg build
mcp-vector-search kg status
mcp-vector-search kg query "find all Python functions"
# Chat mode with LLM
mcp-vector-search chat "explain the authentication flow"
# Code analysis
mcp-vector-search analyze complexity
mcp-vector-search analyze dead-code
```
### Smart CLI with "Did You Mean" Suggestions
The CLI includes intelligent command suggestions for typos:
```bash
# Typos are automatically detected and corrected
$ mcp-vector-search serach "auth"
No such command 'serach'. Did you mean 'search'?
$ mcp-vector-search indx
No such command 'indx'. Did you mean 'index'?
```
See [docs/guides/cli-usage.md](docs/guides/cli-usage.md) for more details.
## Versioning & Releasing
This project uses semantic versioning with an automated release workflow.
### Quick Commands
- `make version-show` - Display current version
- `make release-patch` - Create patch release
- `make publish` - Publish to PyPI
See [docs/development/versioning.md](docs/development/versioning.md) for complete documentation.
## 📖 Documentation
### Commands
#### `setup` - Zero-Config Smart Setup (Recommended)
```bash
# One command to do everything (recommended)
mcp-vector-search setup
# What it does automatically:
# - Detects project languages and file types
# - Initializes semantic search
# - Indexes entire codebase
# - Configures all detected MCP platforms
# - Sets up file watching
# - Zero configuration needed!
# Force re-setup
mcp-vector-search setup --force
# Verbose output for debugging
mcp-vector-search setup --verbose
```
**Key Features:**
- **Zero Configuration**: No user input required
- **Smart Detection**: Automatically discovers languages and platforms
- **Comprehensive**: Handles init + index + MCP setup in one command
- **Idempotent**: Safe to run multiple times
- **Fast**: Timeout-protected scanning (won't hang on large projects)
- **Team-Friendly**: Commit `.mcp.json` to share configuration
**When to use:**
- ✅ First-time project setup
- ✅ Team onboarding
- ✅ Quick testing in new codebases
- ✅ Setting up multiple MCP platforms at once
#### `install` - Install Project and MCP Integrations (Advanced)
```bash
# Manual setup with more control
mcp-vector-search install
# Install with all MCP integrations
mcp-vector-search install --with-mcp
# Custom file extensions
mcp-vector-search install --extensions .py,.js,.ts
# Skip automatic indexing
mcp-vector-search install --no-auto-index
# Platform-specific MCP integration
mcp-vector-search install claude-code # Project-scoped
mcp-vector-search install cursor # Global
mcp-vector-search install windsurf # Global
mcp-vector-search install vscode # Global
# List available platforms
mcp-vector-search install list
```
**When to use:**
- Use `install` when you need fine-grained control over extensions, models, or MCP platforms
- Use `setup` for quick, zero-config onboarding (recommended)
#### `uninstall` - Remove MCP Integrations
```bash
# Remove specific platform
mcp-vector-search uninstall claude-code
# Remove all integrations
mcp-vector-search uninstall --all
# List configured integrations
mcp-vector-search uninstall list
# Skip backup creation
mcp-vector-search uninstall claude-code --no-backup
# Alias (same as uninstall)
mcp-vector-search remove claude-code
```
#### `init` - Initialize Project (Simple)
```bash
# Basic initialization (no indexing or MCP)
mcp-vector-search init
# Custom configuration
mcp-vector-search init --extensions .py,.js,.ts --embedding-model sentence-transformers/all-MiniLM-L6-v2
# Force re-initialization
mcp-vector-search init --force
```
**Note**: For most users, use `setup` instead of `init`. The `init` command is for advanced users who want manual control.
#### `index` - Index Codebase
```bash
# Index all files
mcp-vector-search index
# Index specific directory
mcp-vector-search index /path/to/code
# Force re-indexing
mcp-vector-search index --force
# Reindex entire project
mcp-vector-search index reindex
# Reindex entire project (explicit)
mcp-vector-search index reindex --all
# Reindex entire project without confirmation
mcp-vector-search index reindex --force
# Reindex specific file
mcp-vector-search index reindex path/to/file.py
```
#### `search` - Semantic Search
```bash
# Basic search
mcp-vector-search search "function that handles user authentication"
# Adjust similarity threshold
mcp-vector-search search "database queries" --threshold 0.7
# Limit results
mcp-vector-search search "error handling" --limit 10
# Search in specific context
mcp-vector-search search similar "path/to/function.py:25"
```
#### `auto-index` - Automatic Reindexing
```bash
# Setup all auto-indexing strategies
mcp-vector-search auto-index setup --method all
# Setup specific strategies
mcp-vector-search auto-index setup --method git-hooks
mcp-vector-search auto-index setup --method scheduled --interval 60
# Check for stale files and auto-reindex
mcp-vector-search auto-index check --auto-reindex --max-files 10
# View auto-indexing status
mcp-vector-search auto-index status
# Remove auto-indexing setup
mcp-vector-search auto-index teardown --method all
```
#### `watch` - File Watching
```bash
# Start watching for changes
mcp-vector-search watch
# Check watch status
mcp-vector-search watch status
# Enable/disable watching
mcp-vector-search watch enable
mcp-vector-search watch disable
```
#### `status` - Project Information
```bash
# Basic status
mcp-vector-search status
# Detailed information
mcp-vector-search status --verbose
```
#### `config` - Configuration Management
```bash
# View configuration
mcp-vector-search config show
# Update settings
mcp-vector-search config set similarity_threshold 0.8
mcp-vector-search config set embedding_model microsoft/codebert-base
# Configure indexing behavior
mcp-vector-search config set skip_dotfiles true # Skip dotfiles (default)
mcp-vector-search config set respect_gitignore true # Respect .gitignore (default)
# Get specific setting
mcp-vector-search config get skip_dotfiles
mcp-vector-search config get respect_gitignore
# List available models
mcp-vector-search config models
# List all configuration keys
mcp-vector-search config list-keys
```
#### `index-code` - Code-Specific Embeddings
```bash
# Index with CodeT5+ embeddings (code-optimized)
mcp-vector-search index-code
# Feature-flagged via environment variable
export MCP_CODE_ENRICHMENT=true
mcp-vector-search index-code
```
#### `visualize` - Interactive D3.js Visualization
```bash
# Launch visualization server
mcp-vector-search visualize
# Start on custom port
mcp-vector-search visualize --port 8080
# Available views:
# - Treemap: Hierarchical view with size/complexity encoding
# - Sunburst: Radial hierarchical view
# - Force Graph: Network visualization of code relationships
# - Knowledge Graph: Entity and relationship visualization
# - Heatmap: Complexity and quality heatmap
```
#### `story` - Development Narrative Generation
```bash
# Generate development narrative from git history
mcp-vector-search story
# Output formats
mcp-vector-search story --format markdown
mcp-vector-search story --format json
mcp-vector-search story --format html
# Serve as HTTP endpoint
mcp-vector-search story --serve
# Extract-only mode (no LLM)
mcp-vector-search story --no-llm
# Custom LLM model
mcp-vector-search story --model gpt-4o
```
#### `kg` - Knowledge Graph Operations
```bash
# Build knowledge graph
mcp-vector-search kg build
# Check knowledge graph status
mcp-vector-search kg status
# Query knowledge graph
mcp-vector-search kg query "find all Python functions"
mcp-vector-search kg query "show classes in module auth"
# Knowledge graph entities:
# - CodeFile, Function, Class, Person
# - ProgrammingLanguage, ProgrammingFramework
```
#### `chat` - LLM-Powered Code Q&A
```bash
# Ask questions about your codebase
mcp-vector-search chat "explain the authentication flow"
mcp-vector-search chat "how does error handling work?"
# Chat iteratively refines answers (up to 30 queries),
# automatically using deep search and KG query tools
# Advanced reasoning mode
mcp-vector-search chat "architectural patterns" --think
# Filter by files
mcp-vector-search chat "validation logic" --files "src/*.py"
```
#### `analyze` - Code Analysis
```bash
# Complexity analysis
mcp-vector-search analyze complexity
# Dead code detection
mcp-vector-search analyze dead-code
# Output formats
mcp-vector-search analyze complexity --json
mcp-vector-search analyze complexity --sarif
mcp-vector-search analyze complexity --output-format markdown
# CI/CD integration
mcp-vector-search analyze complexity --fail-on-smell
```
## 🚀 Performance Features
### LanceDB Backend (Default in v2.1+)
**LanceDB is now the default vector database** for better performance and stability:
- **Serverless Architecture**: No separate server process needed
- **Better Scaling**: Superior performance for large codebases (>100k chunks)
- **File-Based Storage**: Simple directory-based persistence
- **Fewer Corruption Issues**: More stable than ChromaDB's HNSW indices
- **Write Buffering**: 2-4x faster indexing with accumulated batch writes
**To use ChromaDB** (legacy), set environment variable:
```bash
export MCP_VECTOR_SEARCH_BACKEND=chromadb
```
**Migrate existing ChromaDB database**:
```bash
mcp-vector-search migrate db chromadb-to-lancedb
```
See [docs/LANCEDB_BACKEND.md](docs/LANCEDB_BACKEND.md) for detailed documentation.
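The "write buffering" optimization above can be sketched as accumulating rows in memory and committing them in batches instead of one write per chunk. This is an illustrative sketch only; `WriteBuffer` and `flush_fn` are hypothetical names, not the project's API:

```python
# Sketch of write buffering: collect pending rows and flush them to the
# vector store in batches, amortizing per-write overhead.
class WriteBuffer:
    def __init__(self, flush_fn, batch_size=256):
        self.flush_fn = flush_fn      # called with a list of pending rows
        self.batch_size = batch_size
        self.pending = []

    def add(self, row):
        self.pending.append(row)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.flush_fn(self.pending)
            self.pending = []

# Demo: with batch_size=3, seven adds produce two full batches plus a
# remainder flushed explicitly (e.g. on shutdown).
batches = []
buf = WriteBuffer(batches.append, batch_size=3)
for i in range(7):
    buf.add(i)
buf.flush()
```

The same idea is why indexing throughput improves with larger batches: each flush pays the fixed write cost once for many rows.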
### Apple Silicon M4 Max Optimizations
**2-4x speedup on Apple Silicon** with automatic hardware detection:
- **MPS Backend**: Metal Performance Shaders GPU acceleration for embeddings
- **Intelligent Batch Sizing**: Auto-detects GPU memory (384-512 for M4 Max with 128GB RAM)
- **Multi-Core Optimization**: Utilizes all 12 performance cores efficiently
- **Zero Configuration**: Automatically enabled on Apple Silicon Macs
Environment variables for tuning:
```bash
export MCP_VECTOR_SEARCH_MPS_BATCH_SIZE=512 # Override MPS batch size
export MCP_VECTOR_SEARCH_BATCH_SIZE=128 # Override all backends
```
### Semi-Automatic Reindexing
Multiple strategies to keep your index up-to-date without daemon processes:
1. **Search-Triggered**: Automatically checks for stale files during searches
2. **Git Hooks**: Triggers reindexing after commits, merges, checkouts
3. **Scheduled Tasks**: System-level cron jobs or Windows tasks
4. **Manual Checks**: On-demand via CLI commands
5. **Periodic Checker**: In-process periodic checks for long-running apps
```bash
# Setup all strategies
mcp-vector-search auto-index setup --method all
# Check status
mcp-vector-search auto-index status
```
### Configuration
Projects are configured via `.mcp-vector-search/config.json`:
```json
{
"project_root": "/path/to/project",
"file_extensions": [".py", ".js", ".ts"],
"embedding_model": "sentence-transformers/all-MiniLM-L6-v2",
"similarity_threshold": 0.75,
"languages": ["python", "javascript", "typescript"],
"watch_files": true,
"cache_embeddings": true,
"skip_dotfiles": true,
"respect_gitignore": true
}
```
#### Indexing Configuration Options
**`skip_dotfiles`** (default: `true`)
- Controls whether files and directories starting with "." are skipped during indexing
- **Whitelisted directories** are always indexed regardless of this setting:
- `.github/` - GitHub workflows and actions
- `.gitlab-ci/` - GitLab CI configuration
- `.circleci/` - CircleCI configuration
- When `false`: All dotfiles are indexed (subject to gitignore rules if `respect_gitignore` is `true`)
**`respect_gitignore`** (default: `true`)
- Controls whether `.gitignore` patterns are respected during indexing
- When `false`: Files in `.gitignore` are indexed (subject to `skip_dotfiles` if enabled)
**`force_include_patterns`** (default: `[]`)
- Glob patterns to force-include files/directories even if they are gitignored
- Patterns support `**` for recursive matching (e.g., `repos/**/*.java` matches all Java files in `repos/` and subdirectories)
- Force-include patterns override `.gitignore` rules, allowing selective indexing of gitignored directories
- Example use case: Index specific file types in a gitignored `repos/` directory
**Example: Force-include Java files from gitignored directory**
```bash
# Set force_include_patterns via a JSON list
mcp-vector-search config set force_include_patterns '["repos/**/*.java", "repos/**/*.kt"]'
# .gitignore can still exclude repos/ from git, but mcp-vector-search will index the Java/Kotlin files
```
**Example config.json with force_include_patterns:**
```json
{
"respect_gitignore": true,
"force_include_patterns": [
"repos/**/*.java",
"repos/**/*.kt",
"vendor/internal/**/*.go"
]
}
```
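As a rough illustration of the `**` pattern semantics described above, here is a stdlib `fnmatch` sketch. Note this is only an approximation of the project's actual matcher: `fnmatch`'s `*` already crosses `/` boundaries, and real `**` globbing may differ at the top level:

```python
from fnmatch import fnmatchcase

# Approximate check of which paths a force-include pattern would cover.
pattern = "repos/**/*.java"

paths = [
    "repos/app/src/Main.java",     # gitignored, but matches the pattern
    "repos/app/build/Main.class",  # wrong extension, stays excluded
    "src/Main.java",               # outside repos/, governed by normal rules
]
included = [p for p in paths if fnmatchcase(p, pattern)]
```

Only the `.java` file under `repos/` survives the filter, matching the behavior the config example aims for.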
#### Configuration Use Cases
**Default Behavior** (Recommended for most projects):
```bash
# Skip dotfiles AND respect .gitignore
mcp-vector-search config set skip_dotfiles true
mcp-vector-search config set respect_gitignore true
```
**Index Everything** (Useful for deep code analysis):
```bash
# Index all files including dotfiles and gitignored files
mcp-vector-search config set skip_dotfiles false
mcp-vector-search config set respect_gitignore false
```
**Index Dotfiles but Respect .gitignore**:
```bash
# Index configuration files but skip build artifacts
mcp-vector-search config set skip_dotfiles false
mcp-vector-search config set respect_gitignore true
```
**Skip Dotfiles but Ignore .gitignore**:
```bash
# Useful when you want to index files in .gitignore but skip hidden config files
mcp-vector-search config set skip_dotfiles true
mcp-vector-search config set respect_gitignore false
```
**Selective Gitignore Override with Force-Include Patterns**:
```bash
# Index specific file types from gitignored directories
# Example: .gitignore excludes repos/, but you want to index Java/Kotlin files
mcp-vector-search config set respect_gitignore true
mcp-vector-search config set force_include_patterns '["repos/**/*.java", "repos/**/*.kt"]'
# This allows:
# - .gitignore to exclude repos/ from git (keeps your repo clean)
# - mcp-vector-search to index Java/Kotlin files in repos/ (semantic search)
# - Other files in repos/ (e.g., .class, .jar) remain excluded
```
## 🏗️ Architecture
### Core Components
- **Parser Registry**: Extensible system for language-specific parsing
- **Semantic Indexer**: Efficient code chunking and embedding generation
- **Vector Database**: LanceDB for similarity search
- **File Watcher**: Real-time monitoring and incremental updates
- **CLI Interface**: Rich, user-friendly command-line experience
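To make the "Semantic Indexer" role concrete, here is a toy chunker that splits Python source into function/class chunks using the stdlib `ast` module. It is illustrative only; the project's real indexer handles 13 languages via Tree-sitter with regex fallback:

```python
import ast

# Toy semantic chunker: one chunk per top-level function or class.
def chunk_python(source):
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            text = "\n".join(lines[node.lineno - 1 : node.end_lineno])
            chunks.append((node.name, text))
    return chunks

code = "def login(user):\n    return True\n\nclass Session:\n    pass\n"
names = [name for name, _ in chunk_python(code)]
```

Each chunk would then be embedded and stored in the vector database, keyed by its name and location.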
### Supported Languages
MCP Vector Search supports **13 languages** for semantic code search:
| Language | Extensions | Status | Features |
|------------|------------|--------|----------|
| Python | `.py`, `.pyw` | ✅ Full | Functions, classes, methods, docstrings |
| JavaScript | `.js`, `.jsx`, `.mjs` | ✅ Full | Functions, classes, JSDoc, ES6+ syntax |
| TypeScript | `.ts`, `.tsx` | ✅ Full | Interfaces, types, generics, decorators |
| C# | `.cs` | ✅ Full | Classes, interfaces, structs, enums, methods, XML docs, attributes |
| Dart | `.dart` | ✅ Full | Functions, classes, widgets, async, dartdoc |
| PHP | `.php`, `.phtml` | ✅ Full | Classes, methods, traits, PHPDoc, Laravel patterns |
| Ruby | `.rb`, `.rake`, `.gemspec` | ✅ Full | Modules, classes, methods, RDoc, Rails patterns |
| Java | `.java` | ✅ Full | Classes, methods, annotations, interfaces |
| Go | `.go` | ✅ Full | Functions, structs, interfaces, packages |
| Rust | `.rs` | ✅ Full | Functions, structs, traits, implementations |
| HTML | `.html`, `.htm` | ✅ Full | Semantic content extraction, heading hierarchy, text chunking |
| Text/Markdown | `.txt`, `.md`, `.markdown` | ✅ Basic | Semantic chunking for documentation |
#### New Language Support
**HTML Support** (Unreleased):
- **Semantic Extraction**: Content from h1-h6, p, section, article, main, aside, nav, header, footer
- **Intelligent Chunking**: Based on heading hierarchy (h1-h6)
- **Context Preservation**: Maintains class and id attributes for searchability
- **Script/Style Filtering**: Ignores non-content elements
- **Use Cases**: Static sites, documentation, web templates, HTML fragments
**Dart/Flutter Support** (v0.4.15):
- **Widget Detection**: StatelessWidget, StatefulWidget recognition
- **State Classes**: Automatic parsing of `_WidgetNameState` patterns
- **Async Support**: Future<T> and async function handling
- **Dartdoc**: Triple-slash comment extraction
- **Tree-sitter AST**: Fast, accurate parsing with regex fallback
**PHP Support** (v0.5.0):
- **Class Detection**: Classes, interfaces, traits
- **Method Extraction**: Public, private, protected, static methods
- **Magic Methods**: __construct, __get, __set, __call, etc.
- **PHPDoc**: Full comment extraction
- **Laravel Patterns**: Controllers, Models, Eloquent support
- **Tree-sitter AST**: Fast parsing with regex fallback
**Ruby Support** (v0.5.0):
- **Module/Class Detection**: Full namespace support (::)
- **Method Extraction**: Instance and class methods
- **Special Syntax**: Method names with ?, ! support
- **Attribute Macros**: attr_accessor, attr_reader, attr_writer
- **RDoc**: Comment extraction (# and =begin...=end)
- **Rails Patterns**: ActiveRecord, Controllers support
- **Tree-sitter AST**: Fast parsing with regex fallback
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
### Development Setup
```bash
# Clone the repository
git clone https://github.com/bobmatnyc/mcp-vector-search.git
cd mcp-vector-search
# Install development environment (includes dependencies + editable install)
make dev
# Test CLI from source (recommended during development)
./dev-mcp version # Shows [DEV] indicator
./dev-mcp search "test" # No reinstall needed after code changes
# Run tests and quality checks
make test-unit # Run unit tests
make quality # Run linting and type checking
make fix # Auto-fix formatting issues
# View all available targets
make help
```
For detailed development workflow and `dev-mcp` usage, see the [Development](#-development) section below.
### Adding Language Support
1. Create a new parser in `src/mcp_vector_search/parsers/`
2. Extend the `BaseParser` class
3. Register the parser in `parsers/registry.py`
4. Add tests and documentation
## 📊 Performance
- **Indexing Speed**: ~1000 files/minute (typical Python project)
- **Search Latency**: <100ms for most queries
- **Memory Usage**: ~50MB baseline + ~1MB per 1000 code chunks
- **Storage**: ~1KB per code chunk (compressed embeddings)
## ⚠️ Known Limitations (Alpha)
- **Tree-sitter Integration**: Currently using regex fallback parsing (Tree-sitter setup needs improvement)
- **Search Relevance**: Embedding model may need tuning for code-specific queries
- **Error Handling**: Some edge cases may not be gracefully handled
- **Documentation**: API documentation is minimal
- **Testing**: Limited test coverage, needs real-world validation
## 🙏 Feedback Needed
We're actively seeking feedback on:
- **Search Quality**: How relevant are the search results for your codebase?
- **Performance**: How does indexing and search speed feel in practice?
- **Usability**: Is the CLI interface intuitive and helpful?
- **Language Support**: Which languages would you like to see added next?
- **Features**: What functionality is missing for your workflow?
Please [open an issue](https://github.com/bobmatnyc/mcp-vector-search/issues) or start a [discussion](https://github.com/bobmatnyc/mcp-vector-search/discussions) to share your experience!
## 🔮 Roadmap
### v2.5: Production (Current) ✅
- [x] Core CLI interface
- [x] Multi-language parsing (13 languages: Python, JavaScript, TypeScript, C#, Dart, PHP, Ruby, Java, Go, Rust, HTML, Markdown, Text)
- [x] LanceDB default backend (ChromaDB legacy support)
- [x] Apple Silicon optimizations (2-4x speedup with MPS)
- [x] File watching and auto-reindexing
- [x] MCP server implementation with 17 tools
- [x] Advanced search modes (semantic, contextual, similar code)
- [x] Code analysis tools (complexity, dead code detection, code smells)
- [x] Interactive D3.js visualization (5+ views: Treemap, Sunburst, Force Graph, KG, Heatmap)
- [x] Knowledge Graph with KuzuDB (entity extraction, relationship mapping)
- [x] Development narrative generation (`story` command)
- [x] Chat mode with LLM integration (iterative refinement, up to 30 queries)
- [x] CodeT5+ code-specific embeddings
- [x] Pipeline parallelism (37% faster indexing)
- [x] Production-ready performance (write buffering, GPU acceleration, async pipeline)
### v2.6+: Enhancements 🔮
- [ ] Hybrid search (vector + keyword + BM25)
- [ ] Additional language support (more languages beyond 13)
- [ ] IDE extensions (VS Code, JetBrains)
- [ ] Team collaboration features
- [ ] Advanced code refactoring suggestions
- [ ] Real-time collaboration on knowledge graph
- [ ] Multi-project knowledge graph federation
## 🛠️ Development
### Three-Stage Development Workflow
**Stage A: Local Development & Testing**
```bash
# Setup development environment
make dev
# Run development tests
make test-unit
# Run CLI from source (recommended during development)
./dev-mcp version # Visual [DEV] indicator
./dev-mcp status # Any command works
./dev-mcp search "auth" # Immediate feedback on changes
# Run quality checks
make quality
# Alternative: use uv run directly
uv run mcp-vector-search version
```
#### Using the `dev-mcp` Development Helper
The `./dev-mcp` script provides a streamlined way to run the CLI from source code during development, eliminating the need for repeated installations.
**Key Features:**
- **Visual [DEV] Indicator**: Shows `[DEV]` prefix to distinguish from installed version
- **No Reinstall Required**: Reflects code changes immediately
- **Complete Argument Forwarding**: Works with all CLI commands and options
- **Verbose Mode**: Debug output with `--verbose` flag
- **Built-in Help**: Script usage with `--help`
**Usage Examples:**
```bash
# Basic commands (note the [DEV] prefix in output)
./dev-mcp version
./dev-mcp status
./dev-mcp index
./dev-mcp search "authentication logic"
# With CLI options
./dev-mcp search "error handling" --limit 10
./dev-mcp index --force
# Script verbose mode (shows Python interpreter, paths)
./dev-mcp --verbose search "database"
# Script help (shows dev-mcp usage, not CLI help)
./dev-mcp --help
# CLI command help (forwards --help to the CLI)
./dev-mcp search --help
./dev-mcp index --help
```
**When to Use:**
- **`./dev-mcp`** → Development workflow (runs from source code)
- **`mcp-vector-search`** → Production usage (runs installed version via pipx/pip)
**Benefits:**
- **Instant Feedback**: Changes to source code are reflected immediately
- **No Build Step**: Skip the reinstall cycle during active development
- **Clear Context**: Visual `[DEV]` indicator prevents confusion about which version is running
- **Error Handling**: Built-in checks for uv installation and project structure
**Requirements:**
- Must have `uv` installed (`pip install uv`)
- Must run from project root directory
- Requires `pyproject.toml` in current directory
**Stage B: Local Deployment Testing**
```bash
# Build and test clean deployment
./scripts/deploy-test.sh
# Test on other projects
cd ~/other-project
mcp-vector-search init && mcp-vector-search index
```
**Stage C: PyPI Publication**
```bash
# Publish to PyPI
./scripts/publish.sh
# Verify published version
pip install mcp-vector-search --upgrade
```
### Quick Reference
```bash
./scripts/workflow.sh # Show workflow overview
```
See [DEVELOPMENT.md](DEVELOPMENT.md) for detailed development instructions.
## 📚 Documentation
For comprehensive documentation, see **[docs/index.md](docs/index.md)** - the complete documentation hub.
### Getting Started
- **[Installation Guide](docs/getting-started/installation.md)** - Complete installation instructions
- **[First Steps](docs/getting-started/first-steps.md)** - Quick start tutorial
- **[Configuration](docs/getting-started/configuration.md)** - Basic configuration
### User Guides
- **[Searching Guide](docs/guides/searching.md)** - Master semantic code search
- **[Indexing Guide](docs/guides/indexing.md)** - Indexing strategies and optimization
- **[CLI Usage](docs/guides/cli-usage.md)** - Advanced CLI features
- **[MCP Integration](docs/guides/mcp-integration.md)** - AI tool integration
- **[File Watching](docs/guides/file-watching.md)** - Real-time index updates
### Reference
- **[CLI Commands](docs/reference/cli-commands.md)** - Complete command reference
- **[Configuration Options](docs/reference/configuration-options.md)** - All configuration settings
- **[Features](docs/reference/features.md)** - Feature overview
- **[Architecture](docs/reference/architecture.md)** - System architecture
### Development
- **[Contributing](docs/development/contributing.md)** - How to contribute
- **[Testing](docs/development/testing.md)** - Testing guide
- **[Code Quality](docs/development/code-quality.md)** - Linting and formatting
- **[API Reference](docs/development/api.md)** - Internal API docs
- **[Deployment](docs/deployment/README.md)** - Release and deployment guide
### Advanced
- **[Troubleshooting](docs/advanced/troubleshooting.md)** - Common issues and solutions
- **[Performance](docs/architecture/performance.md)** - Performance optimization
- **[Extending](docs/advanced/extending.md)** - Adding new features
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
## 📄 License
Elastic License 2.0 - see [LICENSE](LICENSE) file for details.
**Note**: This software may not be provided to third parties as a hosted or managed service.
## 🙏 Acknowledgments
- [LanceDB](https://lancedb.com/) for vector database
- [Tree-sitter](https://tree-sitter.github.io/) for parsing infrastructure
- [Sentence Transformers](https://www.sbert.net/) for embeddings
- [Typer](https://typer.tiangolo.com/) for CLI framework
- [Rich](https://rich.readthedocs.io/) for beautiful terminal output
---
**Built with ❤️ for developers who love efficient code search**
| text/markdown | null | Robert Matsuoka <bob@matsuoka.com> | null | null | Elastic License 2.0 Copyright (c) 2024-2025 Robert Matsuoka Contact: bob@matsuoka.com ## Acceptance By using the software, you agree to all of the terms and conditions below. ## Copyright License The licensor grants you a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable license to use, copy, distribute, make available, and prepare derivative works of the software, in each case subject to the limitations and conditions below. ## Limitations You may not provide the software to third parties as a hosted or managed service, where the service provides users with access to any substantial set of the features or functionality of the software. You may not move, change, disable, or circumvent the license key functionality in the software, and you may not remove or obscure any functionality in the software that is protected by the license key. You may not alter, remove, or obscure any licensing, copyright, or other notices of the licensor in the software. Any use of the licensor's trademarks is subject to applicable law. ## Patents The licensor grants you a license, under any patent claims the licensor can license, or becomes able to license, to make, have made, use, sell, offer for sale, import and have imported the software, in each case subject to the limitations and conditions in this license. This license does not cover any patent claims that you cause to be infringed by modifications or additions to the software. If you or your company make any written claim that the software infringes or contributes to infringement of any patent, your patent license for the software granted under these terms ends immediately. If your company makes such a claim, your patent license ends immediately for work on behalf of your company. ## Notices You must ensure that anyone who gets a copy of any part of the software from you also gets a copy of these terms. 
If you modify the software, you must include in any modified copies of the software prominent notices stating that you have modified the software. ## No Other Rights These terms do not imply any licenses other than those expressly granted in these terms. ## Termination If you use the software in violation of these terms, such use is not licensed, and your licenses will automatically terminate. If the licensor provides you with a notice of your violation, and you cease all violation of this license no later than 30 days after you receive that notice, your licenses will be reinstated retroactively. However, if you violate these terms after such reinstatement, any additional violation of these terms will cause your licenses to terminate automatically and permanently. ## No Liability *As far as the law allows, the software comes as is, without any warranty or condition, and the licensor will not be liable to you for any damages arising out of these terms or the use or nature of the software, under any kind of legal claim.* ## Definitions The **licensor** is the entity offering these terms, and the **software** is the software the licensor makes available under these terms, including any portion of it. **you** refers to the individual or entity agreeing to these terms. **your company** is any legal entity, sole proprietorship, or other kind of organization that you work for, plus all organizations that have control over, are under the control of, or are under common control with that organization. **control** means ownership of substantially all the assets of an entity, or the power to direct its management and policies by vote, contract, or otherwise. Control can be direct or indirect. **your licenses** are all the licenses granted to you for the software under these terms. **use** means anything you do with the software requiring one of your licenses. **trademark** means trademarks, service marks, and similar rights. 
| code-graph, code-search, d3js, force-layout, interactive-graph, mcp, semantic-search, vector-database, visualization | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: Other/Proprietary License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering :: Visualization",
"Topic :: Software Development :: Code Generators",
"Top... | [] | null | null | >=3.11 | [] | [] | [] | [
"aiofiles>=23.0.0",
"authlib>=1.6.4",
"boto3>=1.35.0",
"click-didyoumean>=0.3.0",
"fastapi>=0.104.0",
"httpx>=0.25.0",
"kuzu>=0.7.0",
"lancedb>=0.6.0",
"loguru>=0.7.0",
"mcp>=1.12.4",
"orjson>=3.9.0",
"packaging>=23.0",
"pandas>=2.0.0",
"psutil>=5.9.0",
"py-mcp-installer>=0.1.4",
"pyda... | [] | [] | [] | [
"Homepage, https://github.com/bobmatnyc/mcp-vector-search",
"Documentation, https://mcp-vector-search.readthedocs.io",
"Repository, https://github.com/bobmatnyc/mcp-vector-search",
"Bug Tracker, https://github.com/bobmatnyc/mcp-vector-search/issues"
] | uv/0.9.18 {"installer":{"name":"uv","version":"0.9.18","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"macOS","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T05:33:48.254664 | mcp_vector_search-2.6.0.tar.gz | 2,925,231 | f2/f4/d35b3be9efd61c65ea1f877835e2dcdc83d106fd87453eb30d662b1b7714/mcp_vector_search-2.6.0.tar.gz | source | sdist | null | false | c61dd0a7df9d01f4fcd6143c17311964 | eb9e0646ac8450104b78af8e094400e30262f3af56e797fa22546f61a1e7f06d | f2f4d35b3be9efd61c65ea1f877835e2dcdc83d106fd87453eb30d662b1b7714 | null | [
"LICENSE"
] | 227 |
2.4 | hbrowser | 0.12.10 | A tool for browsing tasks on e-h/exh-websites. | # HBrowser (hbrowser)
## Setup
### Environment Variables
HBrowser requires the following environment variables:
- `APIKEY_2CAPTCHA`: Your 2Captcha API key for solving CAPTCHA challenges
- `HBROWSER_LOG_LEVEL` (optional): Control logging verbosity (DEBUG, INFO, WARNING, ERROR). Default: INFO
Set the environment variables before running the script:
**Bash/Zsh:**
```bash
export APIKEY_2CAPTCHA=your_api_key_here
export HBROWSER_LOG_LEVEL=INFO # Optional: DEBUG, INFO, WARNING, ERROR
```
**Fish:**
```fish
set -x APIKEY_2CAPTCHA your_api_key_here
set -x HBROWSER_LOG_LEVEL INFO # Optional
```
**Windows Command Prompt:**
```cmd
set APIKEY_2CAPTCHA=your_api_key_here
set HBROWSER_LOG_LEVEL=INFO
```
**Windows PowerShell:**
```powershell
$env:APIKEY_2CAPTCHA="your_api_key_here"
$env:HBROWSER_LOG_LEVEL="INFO"
```
HBrowser uses [2Captcha](https://2captcha.com/) service to automatically solve Cloudflare Turnstile and managed challenges that may appear during login. You need to register for a 2Captcha account and obtain an API key.
## Logging
HBrowser uses Python's built-in `logging` module. You can control the log level using the `HBROWSER_LOG_LEVEL` environment variable:
- **DEBUG**: Detailed information for diagnosing problems (most verbose)
- **INFO**: Confirmation that things are working as expected (default)
- **WARNING**: Something unexpected happened, but the software is still working
- **ERROR**: A serious problem that prevented a function from executing
Example:
```bash
# Set log level to DEBUG for detailed output
export HBROWSER_LOG_LEVEL=DEBUG
python your_script.py
# Set log level to WARNING to see only warnings and errors
export HBROWSER_LOG_LEVEL=WARNING
python your_script.py
```
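A script consuming `HBROWSER_LOG_LEVEL` might configure logging like the sketch below. This is illustrative of the convention described above; HBrowser's own logging setup may differ in detail:

```python
import logging
import os

# Read the level name from the environment, defaulting to INFO;
# unknown values also fall back to INFO.
level_name = os.environ.get("HBROWSER_LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)

logging.basicConfig(level=level)
logger = logging.getLogger("hbrowser")
logger.debug("only shown when HBROWSER_LOG_LEVEL=DEBUG")
logger.info("shown at INFO and DEBUG")
```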
## Usage
Here's a quick example of how to use HBrowser:
```python
from hbrowser import EHDriver
if __name__ == "__main__":
with EHDriver() as driver:
driver.punchin()
```
Here's a quick example of how to use HVBrowser:
```python
from hvbrowser import HVDriver
if __name__ == "__main__":
with HVDriver() as driver:
driver.monstercheck()
```
| text/markdown | Kuan-Lun Wang | null | null | null | GNU Affero General Public License v3 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: P... | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"webdriver-manager>=4.0.2",
"fake-useragent>=2.2.0",
"h2h-galleryinfo-parser>=0.2.2",
"selenium>=4.40.0",
"beautifulsoup4>=4.14.3",
"hv-bie>=0.3.7",
"numpy>=2.2.6",
"opencv-python>=4.12.0.88",
"onnxruntime>=1.15.0",
"2captcha-python>=2.0.2",
"undetected-chromedriver>=3.5.5",
"setuptools>=82.0.... | [] | [] | [] | [
"Homepage, https://github.com/Kuan-Lun/hbrowser",
"Tracker, https://github.com/Kuan-Lun/hbrowser/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T05:33:38.706204 | hbrowser-0.12.10.tar.gz | 5,755,974 | b4/bb/76073814856a64ba6d74e1a0a08bd9196e53b2ba58957a5d74c19e0d1368/hbrowser-0.12.10.tar.gz | source | sdist | null | false | 4274e5b484606c0140536699b93533af | 341dfc9efafa25c6750b99840c2c2afa01cb623660ef6b8450daf722a520740b | b4bb76073814856a64ba6d74e1a0a08bd9196e53b2ba58957a5d74c19e0d1368 | null | [
"LICENSE"
] | 216 |
2.4 | daffy | 2.7.0 | Function decorators for DataFrame validation - columns, data types, and row-level validation with Pydantic. Supports Pandas, Polars, Modin, and PyArrow. | # Daffy — Validate pandas & Polars DataFrames with Python Decorators
[](https://pypi.org/project/daffy/)
[](https://anaconda.org/conda-forge/daffy)
[](https://pypi.org/project/daffy/)
[](https://daffy.readthedocs.io)
[](https://github.com/vertti/daffy/actions)
[](https://codecov.io/gh/vertti/daffy)
**Validate your pandas and Polars DataFrames at runtime with simple Python decorators.** Daffy catches missing columns, wrong data types, and invalid values before they cause downstream errors in your data pipeline.
Also supports Modin and PyArrow DataFrames.
- ✅ **Column & dtype validation** — lightweight, minimal overhead
- ✅ **Value constraints** — nullability, uniqueness, range checks
- ✅ **Row validation with Pydantic** — when you need deeper checks
- ✅ **Works with pandas, Polars, Modin, PyArrow** — no lock-in
---
## Installation
```bash
pip install daffy
```
or with conda:
```bash
conda install -c conda-forge daffy
```
Works with whatever DataFrame library you already have installed. Python 3.10–3.14.
---
## Quickstart
```python
from daffy import df_in, df_out
@df_in(["price", "bedrooms", "location"])
@df_out(["price_per_room", "price_category"])
def analyze_housing(houses_df):
# Transform raw housing data into price analysis
return analyzed_df
```
If a column is missing, has wrong dtype, or violates a constraint — **Daffy fails fast** with a clear error message at the function boundary.
---
## Why Daffy?
Most DataFrame validation tools are schema-first (define schemas separately) or pipeline-wide (run suites over datasets). **Daffy is decorator-first:** validate inputs and outputs where transformations happen.
| | |
| ------------------------ | -------------------------------------------------------------------------------- |
| **Non-intrusive** | Just add decorators — no refactoring, no custom DataFrame types, no schema files |
| **Easy to adopt** | Add in 30 seconds, remove just as fast if needed |
| **In-process** | No external stores, orchestrators, or infrastructure |
| **Pay for what you use** | Column validation is essentially free; opt into row validation when needed |
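The decorator-first idea is easy to picture in plain Python. A minimal, daffy-free sketch of boundary validation (a dict of lists stands in for a DataFrame; all names here are illustrative, not daffy's internals):

```python
import functools

def require_columns(columns):
    """Fail fast at the function boundary if the input lacks required columns."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(df, *args, **kwargs):
            missing = [c for c in columns if c not in df]
            if missing:
                raise AssertionError(f"missing columns: {missing}")
            return func(df, *args, **kwargs)
        return wrapper
    return decorator

@require_columns(["price", "bedrooms"])
def analyze(df):
    # toy transformation; real daffy wraps functions returning DataFrames
    return {**df, "price_per_room": [p / b for p, b in zip(df["price"], df["bedrooms"])]}
```

The decorator neither copies nor wraps the data itself, which is why this style stays cheap to add and to remove.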
---
## Examples
### Column validation
```python
from daffy import df_in, df_out
@df_in(["Brand", "Price"])
@df_out(["Brand", "Price", "Discount"])
def apply_discount(df):
df = df.copy()
df["Discount"] = df["Price"] * 0.1
return df
```
### Regex column matching
Match dynamic column names with regex patterns:
```python
@df_in(["id", "r/feature_\\d+/"])
def process_features(df):
return df
```
### Value constraints
Vectorized checks with zero row iteration overhead:
```python
@df_in({
"price": {"checks": {"gt": 0, "lt": 10000}},
"status": {"checks": {"isin": ["active", "pending", "closed"]}},
"email": {"checks": {"str_regex": r"^[^@]+@[^@]+\.[^@]+$"}},
})
def process_orders(df):
return df
```
Available checks: `gt`, `ge`, `lt`, `le`, `between`, `eq`, `ne`, `isin`, `notnull`, `str_regex`
Also supported: `notin`, `str_startswith`, `str_endswith`, `str_contains`, `str_length`
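Conceptually, each check name maps to a whole-column predicate rather than a per-row Python loop. A rough, daffy-free illustration of three of the documented checks (plain lists keep the sketch dependency-free; real daffy dispatches to the DataFrame backend's vectorized operations):

```python
import re

# Each documented check evaluates a whole column at once.
CHECKS = {
    "gt": lambda col, arg: [v > arg for v in col],
    "isin": lambda col, arg: [v in arg for v in col],
    "str_regex": lambda col, arg: [re.fullmatch(arg, v) is not None for v in col],
}

def failing_values(check, column, arg):
    """Return the values in `column` that violate the named check."""
    mask = CHECKS[check](column, arg)
    return [v for v, ok in zip(column, mask) if not ok]

bad_prices = failing_values("gt", [10, -5, 99], 0)  # [-5]
bad_status = failing_values("isin", ["active", "stale"], ["active", "pending", "closed"])
```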
### Nullability and uniqueness
```python
@df_in({
"user_id": {"unique": True, "nullable": False}, # user_id must be unique and not null
"email": {"nullable": False}, # email cannot be null
"age": {"dtype": "int64"},
})
def clean_users(df):
return df
```
### Row validation with Pydantic
For complex, cross-field validation:
```bash
pip install 'daffy[pydantic]'
```
```python
from pydantic import BaseModel, Field
from daffy import df_in
class Product(BaseModel):
name: str
price: float = Field(gt=0)
stock: int = Field(ge=0)
@df_in(row_validator=Product)
def process_inventory(df):
return df
```
---
## Daffy vs Alternatives
| Use Case | Daffy | Pandera | Great Expectations |
| ---------------------------- | :-----------------: | :----------------: | :-----------------: |
| Function boundary guardrails | ✅ Primary focus | ⚠️ Possible | ❌ Not designed for |
| Quick column/type checks | ✅ Lightweight | ⚠️ Requires schemas | ⚠️ Requires setup |
| Complex statistical checks | ⚠️ Limited | ✅ Extensive | ✅ Extensive |
| Pipeline/warehouse QA | ❌ Not designed for | ⚠️ Some support | ✅ Primary focus |
| Multi-backend support | ✅ | ⚠️ Varies | ✅ |
---
## Configuration
Configure Daffy project-wide via `pyproject.toml`:
```toml
[tool.daffy]
strict = true
```
---
## Documentation
Full documentation available at **[daffy.readthedocs.io](https://daffy.readthedocs.io)**
- [Getting Started](https://daffy.readthedocs.io/getting-started/) — quick introduction
- [Usage Guide](https://daffy.readthedocs.io/usage/) — comprehensive reference
- [API Reference](https://daffy.readthedocs.io/api/) — decorator signatures
- [Changelog](https://github.com/vertti/daffy/blob/master/CHANGELOG.md) — version history
---
## Contributing
Issues and pull requests welcome on [GitHub](https://github.com/vertti/daffy).
## License
MIT
| text/markdown | null | Janne Sinivirta <janne.sinivirta@gmail.com> | null | null | null | column-validation, data-pipeline, data-quality, data-validation, dataframe, dataframe-schema, dataframe-validator, decorator, modin, narwhals, pandas, pandas-validation, polars, polars-validation, pyarrow, pydantic, row-validation, runtime-validation, schema, typing, validate-dataframe, validation | [
"Development Status :: 5 - Production/Stable",
"Framework :: Pydantic",
"Framework :: Pydantic :: 2",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"P... | [] | null | null | >=3.10 | [] | [] | [] | [
"narwhals>=2.14.0",
"tomli>=2.0.0",
"pydantic>=2.4.0; extra == \"pydantic\""
] | [] | [] | [] | [
"homepage, https://github.com/vertti/daffy",
"repository, https://github.com/vertti/daffy",
"documentation, https://github.com/vertti/daffy/blob/master/README.md",
"changelog, https://github.com/vertti/daffy/blob/master/CHANGELOG.md",
"issues, https://github.com/vertti/daffy/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:32:56.830441 | daffy-2.7.0.tar.gz | 57,130 | 55/12/2f470cfe2fe75e960c42c6628dd6a4257c6b8afef914c70ff4427996eb71/daffy-2.7.0.tar.gz | source | sdist | null | false | 09dfc6a3a26a790198c0e6a504402f37 | 17e2f52a34b1f845181767b5a6d2731851531892dc18892a709b165a6f01fca2 | 55122f470cfe2fe75e960c42c6628dd6a4257c6b8afef914c70ff4427996eb71 | MIT | [
"LICENSE"
] | 224 |
2.4 | pmu-tools | 2026.2.21.0 | pmu tools is a collection of tools and libraries for profile collection and performance analysis on Intel CPUs on top of Linux perf. This uses performance counters in the CPU. | 



pmu tools is a collection of tools and libraries for profile collection and performance
analysis on Intel CPUs on top of [Linux perf](https://perf.wiki.kernel.org/index.php/Main_Page).
This uses performance counters in the CPU.
# Quick (non-) installation
pmu-tools doesn't really need to be installed. It's enough to clone the repository
and run the respective tool (like toplev or ocperf) out of the source directory.
To run it from other directories you can use
export PATH=$PATH:/path/to/pmu-tools
or symlink the tool you're interested in to /usr/local/bin or ~/bin. The tools automatically
find their python dependencies.
When first run, toplev / ocperf will automatically download the Intel event lists from
https://github.com/intel/perfmon. This requires working internet access. Later runs can
be done offline. It's also possible to download the event lists ahead, see
[pmu-tools offline](https://github.com/andikleen/pmu-tools/wiki/Running-ocperf-toplev-when-not-on-the-internet)
toplev works with both Python 2.7 and Python 3. However, it requires reasonably recent
perf tools and, depending on the CPU, an up-to-date kernel. For more details
see [toplev kernel support](https://github.com/andikleen/pmu-tools/wiki/toplev-kernel-support)
The majority of the tools also don't require any python dependencies and run
in "included batteries only" mode. The main exception is generating plots or XLSX
spreadsheets, which require external libraries.
If you want to use those run
pip install -r requirements.txt
once, or follow the command suggested in error messages.
jevents is a C library. It has no dependencies other than gcc/make and can be built with
cd jevents
make
# Quick examples
toplev -l2 program
measure whole system in level 2 while program is running
toplev -l1 --single-thread program
measure single threaded program. On hyper threaded systems with
Skylake or older the system should be idle.
toplev -NB program
Measure program showing consolidated bottleneck view and extra
information associated with bottlenecks. Note this will multiplex
performance counters, so there may be measuring errors.
toplev -NB --run-sample program
Measure program showing bottlenecks and extra nodes, and
automatically sample for the location of bottlenecks in a second
pass.
toplev --drilldown --only-bottleneck program
Rerun workload with minimal multiplexing until the critical bottleneck
is found. Only the critical bottleneck is printed.
toplev -l3 --no-desc -I 100 -x, sleep X
measure whole system for X seconds every 100ms, outputting in CSV format.
toplev --all --core C0 taskset -c 0,1 program
Measure program running on core 0 with all nodes and metrics enabled.
toplev --all --xlsx x.xlsx -a sleep 10
Generate spreadsheet with full system measurement for 10 seconds
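The interval mode above (`-I 100 -x,`) emits CSV that is straightforward to post-process. The column layout below is invented for illustration only; check the header row your toplev version actually emits:

```python
import csv, io

# Hypothetical toplev-style interval CSV (columns invented for illustration).
sample = """\
Timestamp,Area,Value,Unit
0.100,Frontend_Bound,31.2,% Slots
0.100,Backend_Bound,45.7,% Slots
0.200,Frontend_Bound,29.8,% Slots
0.200,Backend_Bound,48.1,% Slots
"""

# Group (timestamp, value) samples per tree node.
series = {}
for row in csv.DictReader(io.StringIO(sample)):
    series.setdefault(row["Area"], []).append((float(row["Timestamp"]), float(row["Value"])))

# e.g. average Backend_Bound over the measured intervals
avg_bb = sum(v for _, v in series["Backend_Bound"]) / len(series["Backend_Bound"])
```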
For more details on toplev please see the [toplev tutorial](https://github.com/andikleen/pmu-tools/wiki/toplev-manual)
# What tool to use for what?
You want to:
- understand CPU bottlenecks on the high-level: use toplev.
- display toplev output graphically: toplev --xlsx (or --graph)
- know what CPU events to run, but want to use symbolic names for a new CPU: use ocperf.
- measure interconnect/caches/memory/power management on Xeon E5+: use ucevent (or toplev)
- Use perf events from a C program: use jevents
- Query CPU topology or disable HyperThreading: use cputop
- Change Model Specific Registers: use msr
- Change PCI config space: use pci
For more details on the tools see [TOOLS](TOOLS.md)
# All features:
## Major tools/libraries
* The "ocperf" wrapper to "perf" that provides a full core performance
counter event list for common Intel CPUs. This allows using all the
Intel events, not just the builtin events of perf. It can also be used
as a library from other Python programs.
* The "toplev.py" tool to identify the micro-architectural bottleneck for a workload.
This implements the [TopDown](https://sites.google.com/site/analysismethods/yasin-pubs) or [TopDown2](http://software.intel.com/en-us/articles/how-to-tune-applications-using-a-top-down-characterization-of-microarchitectural-issues)
methodology.
* The "ucevent" tool to manage and compute uncore performance events. Uncore is the part of the CPU that is not core. Supports many metrics for power management, IO, QPI (interconnect), caches, and others. ucevent automatically generates event descriptions
for the perf uncore driver and pretty prints the output. It also supports
computing higher level metrics derived from multiple events.
* A library to resolve named intel events (like INST_RETIRED.ANY)
to perf_event_attr ([jevents](http://halobates.de/jevents.html))
and provide higher level function for using the Linux perf API
for self profiling or profiling other programs.
It also has a "perf stat" clone called "jestat"
* A variety of tools for plotting and post processing perf stat -I1000 -x,
or toplev.py -I1000 -x, interval measurements.
* Some utility libraries and functions for MSR access, CPU topology
and other functionality, as well as example programs showing how to program the Intel PMU.
There are some obsolete tools which are not supported anymore, like simple-pebs.
These are kept as PMU programming reference, but may need some updates to build
on newer Linux kernels.
# Recent new features:
## TMA 5.01 release
* toplev updated to TMA 5.01:
* Bottlenecks View tops the spreadsheet of a highly detailed TMA tree with over 120 nodes; GNR & LNL models
* New Models
* GNR for Granite Rapids - the 6th gen Xeon Scalable server processors
* LNL for P-core in Arrow Lake and Lunar Lake processors - incl support for the 3-level cache
* Bottlenecks View:
* The Bottlenecks View opens the metrics list starting with the 5.0 release. It has been placed above the TMA tree as it serves as an abstraction of it.
* Cache_Memory_Latency now accounts for Split_Loads/Stores and Lock_Latency (were under Other_Bottlenecks) [SKL onwards]
* Improved Memory_Cache_Latency accuracy through L1_Latency_Dependency [LNL]
* New Tree Nodes
* 25 new nodes detailing next levels under Branch_Mispredicts, Divider, ICache_Misses, L1/L2 d-cache latencies and STLB Misses
* New Informative Metrics
* Cond_TK_Fwd: Taken conditionals are split into Cond_TK_Fwd and Cond_TK_Bwd in the Info.Branches group [LNL]
* DSB_Switches_Ret, MS_Latency_Ret, Unknown_Branches_Ret in Info.Frontend group [MTL onwards]
* L1DL0_MPKI, L1DL0_Cache_Fill_BW in Info.Memory group [LNL]
* Load_STLB_Miss_Ret, Store_STLB_Miss_Ret in Info.Memory.TLB group [MTL onwards]
* Useless_HWPF in Info.Memory.Prefetches group [ICL onwards]
* Enhancements & fixes
* Fixed Ports_Utilized_0 error in 4.8 [MTL]
* Memory related bottlenecks were miscalculated [MTL only]
* Memory_Synchronization has a typo in its tag [all]
* toplev updated the newer E-core models to E-core TMA 4.0
* toplev supports the Sierra Forest (Xeon 6) E-core based server CPU.
* toplev supports --host, --guest filters.
* toplev supports generating weak groups, mostly to work around issues during
testing.
## TMA 4.8 release
* toplev updated to TMA 4.8:
* Bottlenecks View:
* Renamed Base_Non_Br to Useful_Work and simplified descriptions for all BV metrics.
* Cache_Memory_Latency now accounts for L1 cache latency as well.
* Improved Branching_Overhead accuracy for function calling and alignments
* Cross-reference Bottlenecks w/ TMA tree for tool visualization (VTune request)
* New Tree Nodes
* L1_Hit_Latency: estimates fraction of cycles with demand load accesses that hit the L1 cache (relies on Dependent_Loads_Weight SystemParameter today)
* New Informative Metrics
* Fetch_LSD (client), Fetch_DSB, Fetch_MITE under Info.Pipeline group [SKL onwards]
* DSB_Bandwidth under Info.Botlnk.L2
* L2MPKI_RFO under Info.Memory
* Key Enhancements & fixes
* Fixed Ports_Utilization/Ports_Utilized_0
* Slightly tuned memory (fixed cost) latencies [SPR, EMR]
* Corrected CPU_Utilization, CPUs_Utilized for Linux perf based tools
* toplev now supports Meteor Lake systems.
* Add a new genretlat.py tool to tune the toplev model for a workload. The basic tuning needs to be
generated before first toplev use by running genretlat -o mtl-retlat.json ./workloads/BC1s (or a suitable workload). toplev
has a new --ret-latency option to override the tuning file.
## TMA 4.7 release
* toplev updated to TMA 4.7:
* New --hbm-only for sprmax in HBM Only mode. toplev currently cannot auto detect this condition.
* New Models
* SPR-HBM: model for Intel Xeon Max (server) processor covering HBM-only mode (on top of cache mode introduced in 4.6 release)
* New Features
* Releasing the Bottlenecks View - a rather complete version [SKL onwards]
* Bottlenecks View is an abstraction or summarization of the 100+ TMA tree nodes into a 12-entry vector of familiar performance issues, presented under the Info.Bottlenecks section.
* This release introduces the Core_Bound_Est metric: an estimation of total pipeline cost when the execution is compute-bound.
* Besides, it balances the distribution among Branching Retired, Irregular_Overhead, Mispredictions and Instruction_Fetch_BW, as well as
* enhances Cache_Memory_Latency to account for Stores with better accuracy.
* New Tree Metrics (nodes)
* HBM_Bound: stalls due to High Bandwidth Memory (HBM) accesses by loads.
* Informative Metrics
* New: Uncore_Frequency in server models
* New: IpPause [CFL onwards]
* Key Enhancements & fixes
* Hoisted Serializing_Operation and AMX_Busy to level 3; directly under Core Bound [SKL onwards]
* Swapped semantics of ILP (becomes per-thread) and Execute (per physical core) info metrics
* Moved Nop_Instructions to Level 4 under Other_Light_Op [SKL onwards]
* Moved Shuffles_256b to Level 4 under Other_Light_Op [ADL onwards]
* Renamed Local/Remote_DRAM to Local/Remote_MEM to account for HBM too
* Reduced # events when SMT is off [all]
* Reduced # events for HBM metrics; fixed MEM_Bandwidth/Latency descriptions [SPR-HBM]
* Tuned Threshold for: Branching_Overhead; Fetch_Bandwidth, Ports_Utilized_3m
* toplev has new options:
* --node-metrics or -N collects and shows metrics related to selected TMA nodes if their nodes
cross the threshold. With --drilldown it will show only the metrics of the bottleneck.
* --areas can select nodes and metrics by area
* --bottlenecks or -B shows the bottleneck view metrics (equivalent to --areas Info.Bottleneck)
* --only-bottleneck only shows the bottleneck, as well as its associated metrics if enabled.
* interval-plot has --level and --metrics arguments to configure the inputs. It now defaults to
level 1 only, no metrics to make the plots more readable.
* toplev has a new --reserved-counters option to handle systems that reserve some generic counters.
* toplev has a new --no-sort option to disable grouping metrics with tree nodes.
## TMA 4.6 release
* toplev updated to Ahmad Yasin's TMA 4.6
* Support for Intel Xeon Max processors (SPRHBM)
* New Features:
* Support for optimized power-performance states via C01/C02_Wait nodes under Core Bound category as well as C0_Wait info metric [ADL onwards]
* HBM_Bound: stalls due to High Bandwidth Memory (HBM) accesses by loads.
* C01/C02_Wait: cycles spent in C0.1/C0.2 power-performance optimized states
* Other_Mispredicts: slots wasted due to other cases of misprediction (non-retired x86 branches or other types)
* Other_Nukes: slots wasted due to Nukes (Machine Clears) not related to memory ordering.
* Info.Bottlenecks: Memory_Synchronization, Irregular_Overhead (fixes Instruction_Fetch_BW), Other_Bottlenecks [SKL onwards]
* CPUs_Utilized - Average number of utilized CPUs [all]
* New metrics UC_Load_PKI, L3/DRAM_Bound_L, Spec_Clears_Ratio, EPC [SKL onwards]
* Unknown_Branch_Cost and Uncore_Rejects & Bus_Lock_PKI (support for Resizable Bar) [ADL]
* Enabled FP_Vector_128b/256b nodes in SNB/JKT/IVB/IVT
* Enabled FP_Assists, IpAssist into, as well as Fixed Mixing_Vectors [SKL through TGL]
* TIOPs plus 8 new metrics Offcore_*_PKI and R2C_*_BW [SPR, SPR-HBM]
* Grouped all Uncore-based Mem Info metric under MemOffcore distinct group (to ease skipping their overhead) [all]
* Key Enhancements & fixes
* Reduced # events (multiplexing) for GFLOPs, FLOPc, IpFLOP, FP_Scalar and FP_Vector [BDW onwards]
* Reduced # events (multiplexing) & Fixed Serializing_Operations, Ports_Utilized_0 [ADL onwards]
* Fixed Branch_Misprediction_Cost overestimate, Mispredictions [SKL onwards]
* Fixed undercount in FP_Vector/IpArith (induced by 4.5 update) + Enabled/fixed IO_Read/Write_BW [SPR]
* Tuned #Avg_Assist_Cost [SKL onwards]
* Remove X87_Use [HSW/HSX]
* Renamed Shuffles node & some metrics/groups in Info.Bottlenecks and Info.Memory*. CountDomain fixes
## TMA 4.4 release
* toplev updated to Ahmad Yasin's TMA 4.4
* Add support for Sapphire Rapids servers
* New breakdown of Heavy_Operations, add new nodes for Assists, Page Faults
* A new Int_Operations level 3 node, including Integer Vector and Shuffle
* Support for RDT MBA stalls.
* AMX and FP16 support
* Better FP_Vector breakdown
* Support 4wide MITE breakdown.
* Add new Info.Pipeline Metrics group.
* Support for Retired/Executed uops and String instruction cycles
* Frequency of microcode assists.
* Add Core_Bound_Likely for SMT and IpSWF for software prefetches.
* Cache bandwidth is split per processor and per core.
* Snoop Metric group for cross processor snoops.
* Various bug fixes and improvements.
* Support for running on Alderlake with a hybrid Goldencove / Gracemont model
Add a new --aux option to control the auxiliary nodes on Atom.
--cputype atom/core is supported to filter on core types.
* cputop supports an atom/core shortcut to generate the cpu mask of
hybrid CPUs. Use like toplev $(cputop core cpuset) workload
* toplev now supports a --abbrev option to abbreviate node names
* Add experimental --thread option to support per SMT thread measurements on pre ICL
CPUs.
## TMA 4.3 release
* toplev updated to Ahmad Yasin's TMA 4.3: New Retiring.Light_Operations breakdown
*Notes: ADL is missing so far. TGL/RKL still use the ICL model.
if you see missing events please remove ~/.cache/pmu-events/\* to force a redownload*
* New Tree Metrics (nodes)
* A brand new breakdown of the Light_Operations sub-category (under Retiring category) per operation type:
* Memory_Operations for (fraction of retired) slots utilized by load or store memory accesses
* Fused_Instructions for slots utilized by fused instruction pairs (mostly conditional branches)
* Non_Fused_Branches for slots utilized by remaining types of branches.
* (Branch_Instructions is used in lieu of the last two nodes for ICL .. TGL models)
* Nop_Instructions for slots utilized by NOP instructions
* FP_Arith - a fraction estimate of arithmetic floating-point operations (legacy)
* CISC new tree node for complex instructions (under the Heavy_Operations sub-category)
* Decoder0_Alone new tree node for instructions requiring heavy decoder (under the Fetch_Bandwidth sub-category)
* Memory_Fence new tree node for LFENCE stalls (under the Core_Bound sub-category)
* Informative Groups
* New Info.Branches group for branch instructions of certain types: Cond_TK (Conditional TaKen branches), Cond_NT (Conditional Non-Taken), CallRet, Jump and Other_Branches.
* Organized (almost all) Info metrics in 5 mega-buckets of {Fed, Bad, Ret, Cor, Mem} using the Metric Group column
* New Informative Metrics
* UpTB for Uops per Taken Branch
* Slots_Utilization for Fraction of Physical Core issue-slots utilized by this Logical Processor [ICL onwards]
* Execute_per_Issue for the ratio of Uops Executed to Uops Issued (allocated)
* Fetch_UpC for average number of fetched uops when the front-end is delivering uops
* DSB_Misses_Cost for Total penalty related to DSB misses
* IpDSB_Miss_Ret for Instructions per (any) retired DSB miss
* Kernel CPI for Cycles Per Instruction in kernel (operating system) mode
* Key Enhancements & fixes
* Fixed Heavy_Operations for few uop instructions [ICL, ICX, TGL].
* Fixed Fetch_Latency overcount (or Fetch_Bandwidth undercount) [ICL, ICX, TGL]
* Capped nodes using fixed-costs, e.g. DRAM_Bound, to 100% max. Some tools did this in ad-hoc manner thus far [All]
* Fixed DTLB_{Load,Store} and STLB_Hit_{Load,Store} in case of multiple hits per cycles [SKL onwards]
* Fixed Lock_Latency to account for lock that hit in L1D or L2 caches [SKL onwards]
* Fixed Mixing_Vectors and X87_Use to Clocks and Slots Count Domains, respectively [SKL onwards]
* Many other fixes: Thresholds, Tagging (e.g. Ports_Utilized_2), Locate-with, Count Domain, Metric Group, Metric Max, etc
* jestat now supports CSV output (-x,), not aggregated.
* libjevents has utility functions to output event list in perf stat style (both CSV and normal)
* toplev now outputs multiplexing statistics by default. This can be disabled with --no-mux.
* cputop now supports hybrid types (type=="core"/"atom")
* ucevent now supports Icelake Server
* toplev now supports Icelake Server
## TMA 4.2 release
* toplev updated to Ahmad Yasin's TMA 4.2: Bottlenecks Info group, Tuned memory access costs
* New Metrics
* New Info.Bottlenecks group aggregating total performance-issue costs in SLOTS across the tree: [SKL onwards]
* Memory_Latency, Memory_Bandwidth, Memory_Data_TLBs
* Big_Code, Instruction_Fetch_BW, Branching_Overheads and
* Mispredictions (introduced in 4.1 release)
* New tree node for Streaming_Stores [ICL onwards]
* Key Enhancements & fixes
* Tuned memory metrics with up-to-date frequency-based measured costs [TGL, ICX]
* The Average_Frequency is calculated using the TSC (TimeStamp Counter) value
* With this key enhancement #Mem costs become NanoSecond- (was Constant), DurationTimeInMilliSeconds becomes ExternalParameter CountDomain and #Base_Frequency is deprecated
* The previous method of setting frequency using Base_Frequency is deprecated.
* Fixed Ports_Utilization for detection of serializing operations - [issue#339](https://github.com/andikleen/pmu-tools/issues/339) [SKL onwards]
* Tuned MITE, DSB, LSD and move to Slots_Estimated domain [all]
* Capping DTLB_Load and STLB_Hit_Load cost using events in Clocks CountDomain [SKL onwards]
* Tuned Pause latency using default setting [CLX]
* Fixed average Assists cost [IVB onwards]
* Fixed Mispredicts_Resteers Clears_Resteers Branch_Mispredicts Machine_Clears and Mispredictions [ICL+]
* A parameter to avoid using PERF_METRICS MSR e.g. for older OS kernels (implies higher event multiplexing)
* Reduced # events for select nodes collections (lesser event multiplexing): Backend_Bound/Core_Bound, Clears_Resteers/Unknown_Branches, Kernel_Utilization
* Other fixes: Thresholds, Tagging (e.g. Ports_Utilized_2), Locate-with, etc
* toplev now has a --parallel argument to process large --import input files
with multiple threads. There is a new interval-merge tool that can merge
multiple perf-output files.
* toplev now supports a --subset argument that can process parts of --import input files,
either by splitting them or by sampling. This is a building block for more efficient
processing of large input files.
* toplev can now generate scripts to collect data with perf stat record to lower runtime
collection overhead, and import the perf.data, using a new --script-record option.
This currently requires unreleased perf patches, hopefully in Linux 5.11.
* toplev can now support json files for Chrome's about://tracing with --json
* toplev now supports --no-multiplex in interval mode (-Ixxx)
* The tools no longer force Python 2, so they run out of the box
on distributions which do not install Python 2.
* toplev now hides the perf command line by default. Override with --perf.
* Updated to TMA 4.11: Fixed an error in misprediction-related and Power License metrics
* toplev now supports the new fixed TMA metrics counters on Icelake. This requires
the upcoming 5.9+ kernel.
## TMA 4.1 release
* toplev was updated to Ahmad Yasin's/Anton Hanna's TMA 4.1
New Metrics:
- Re-arrange Retiring Level 2 into Light\_Operations & Heavy\_Operations. Light\_Operations replaces
the previous Base (or "General Retirement") while Heavy\_Operations is a superset of the
Microcode\_Sequencer node (that moves to Level 3)
- Mixing\_Vectors: hints on a pitfall when intermixing the newer AVX* with legacy SSE* vectors,
a tree node under Core Bound [SKL onwards]
Key Enhancements & fixes
- Tuning of Level 2 breakdown for Backend\_Bound, Frontend\_Bound (rollback FRONTEND\_RETIRED 2-events use) [SKL onwards]
- Improved branch misprediction related metrics to leverage a new PerfMon event [ICL onwards]
- Improved CORE\_CLKS & #Retire\_Slots-based metrics [ICL onwards]
- Adjusted cost of all nodes using MEM\_LOAD\_\*RETIRED.\* in case of shadow L1 d-cache misses
- renamed Frontend_ to Fetch\_Latency/Bandwidth [all]
- Additional documentation/details to aid automated parsing in ‘For Tool Developers’.
- Other fixes including Thresholds, Tagging (e.g. $issueSnoops), Locate-with, Metric Group
* toplev can now generate charts in xlsx files with the --xchart option.
Older changes in [CHANGES](CHANGES.md)
# Help wanted
- The plotting tools could use a lot of improvements. Both tl-serve and tl-barplot.
If you're good in python or JS plotting any help improving those would be appreciated.
# Mailing list
Please post to the linux-perf-users@vger.kernel.org mailing list.
For bugs please open an issue on https://github.com/andikleen/pmu-tools/issues
# Licenses
ocperf, toplev, ucevent, parser are under GPLv2, jevents is under the modified BSD license.
Andi Kleen
| text/markdown | null | Andi Kleen <pmu-tools@halobates.de> | null | null | null | null | [
"License :: OSI Approved :: GNU General Public License v2 (GPLv2)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/andikleen/pmu-tools/",
"Issues, https://github.com/andikleen/pmu-tools/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:32:48.678165 | pmu_tools-2026.2.21.0.tar.gz | 1,203,508 | 6a/33/b9994d942f54c3b6dca0f6998e5981f74fa28dfae4e5eaf494eb5c51f248/pmu_tools-2026.2.21.0.tar.gz | source | sdist | null | false | faf56fcef68d3bef88d48c143e82ca73 | 454185f00cacc4fd294e7722e4706e27cda0aad12f2f4130ae11a9865aa38a82 | 6a33b9994d942f54c3b6dca0f6998e5981f74fa28dfae4e5eaf494eb5c51f248 | GPL-2.0-only | [
"COPYING"
] | 219 |
2.4 | essence-wars | 0.8.8 | High-performance card game environment | <p align="center">
<img src="crates/essence-wars-ui/static/ui/essence_wars_banner.webp" alt="Essence Wars" width="700">
</p>
<p align="center">
<strong>A deterministic, perfect-information strategy card game designed for AI research</strong>
</p>
<p align="center">
<a href="#quick-start">Quick Start</a> •
<a href="#features">Features</a> •
<a href="#project-structure">Project Structure</a> •
<a href="#documentation">Documentation</a> •
<a href="#contributing">Contributing</a>
</p>
---
## What is Essence Wars?
Essence Wars is a strategic two-player card game engine with a focus on:
- **Perfect Information** — Both players see all cards, including hands and decks. Victory comes from outthinking your opponent, not luck.
- **Deterministic Execution** — Seeded RNG ensures every game is reproducible, perfect for AI training and analysis.
- **Lane-Based Combat** — Creatures occupy board positions, creating spatial strategy alongside card selection.
- **ML-Ready Design** — 328-float state tensors, 256 discrete actions, and high-performance Rust engine (~33k games/sec).
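Determinism here means a fixed seed reproduces an entire game. The guarantee can be illustrated with any seeded RNG (a plain Python sketch, not the engine's actual API):

```python
import random

def play_out(seed, turns=5):
    """Stand-in for a game rollout: every 'random' decision draws from one seeded RNG."""
    rng = random.Random(seed)
    # 256 mirrors the discrete action space mentioned above
    return [rng.randrange(256) for _ in range(turns)]

assert play_out(42) == play_out(42)  # same seed -> identical game
assert play_out(42) != play_out(43)  # different seed -> (almost surely) a different game
```

Replays and AI training both rely on this property: storing the seed plus the action sequence is enough to reconstruct a game exactly.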
<p align="center">
<img src="docs/screenshots/human-vs-ai.webp" alt="Gameplay" width="80%">
</p>
## Features
### Desktop Application
Full-featured Tauri desktop client with Human vs AI, Spectator mode, Replays, Deck Builder, and Tutorial.
<p align="center">
<img src="docs/screenshots/main-menu.webp" alt="Main Menu" width="32%">
<img src="docs/screenshots/spectator-mode.webp" alt="Spectator Mode" width="32%">
<img src="docs/screenshots/deck-builder.webp" alt="Deck Builder" width="32%">
</p>
### AI & Machine Learning
- **Multiple Bot Types** — Random, Greedy (28 tunable weights), MCTS, Alpha-Beta search
- **Python ML Framework** — Gymnasium/PettingZoo environments, PPO, AlphaZero, Card2Vec
- **Training Infrastructure** — Callbacks, experiment tracking, Elo ratings, HTML reports
### Claude Code Integration
Play Essence Wars directly in Claude Code via MCP (Model Context Protocol):
- Natural language game control
- AI move recommendations with analysis
- Live game visualization in Tauri UI
## Quick Start
### Play the Desktop App
```bash
# Clone and build
git clone https://github.com/christianwissmann85/essence-wars
cd essence-wars
cargo build --release
# Launch the UI
cd crates/essence-wars-ui
pnpm install && pnpm tauri:dev
```
### Train ML Agents (Python)
```bash
# Install Python package
pip install essence-wars[train]
# Train a PPO agent
essence-wars train ppo --timesteps 500000
# Benchmark against baselines
essence-wars benchmark --checkpoint model.pt
```
### Play via Claude Code (MCP)
Add to your `.mcp.json`:
```json
{
"mcpServers": {
"essence-wars": {
"command": "./target/release/essence-wars-mcp"
}
}
}
```
Then in Claude Code: *"Start a game with the Iron Wall deck against Swarm Aggro"*
## Project Structure
```
essence-wars/
├── crates/
│ ├── cardgame/ # Core game engine (Rust)
│ ├── essence-wars-ui/ # Desktop app (Tauri + Svelte)
│ └── essence-wars-mcp/ # MCP server for Claude Code
├── python/ # ML agents & training (PyTorch)
├── data/
│ ├── cards/ # Card definitions (YAML)
│ ├── decks/ # Deck definitions (TOML)
│ └── weights/ # Bot weight files
└── docs/ # Documentation
```
| Component | Description | README |
|-----------|-------------|--------|
| **cardgame** | High-performance game engine, bots, CLI tools | [README](crates/cardgame/README.md) |
| **essence-wars-ui** | Cross-platform desktop client | [README](crates/essence-wars-ui/README.md) |
| **essence-wars-mcp** | MCP server for Claude Code | [README](crates/essence-wars-mcp/README.md) |
| **python** | ML agents, training, benchmarking | [README](python/README.md) |
## Three Factions
| Faction | Identity | Keywords |
|---------|----------|----------|
| **Argentum Combine** | Defense & durability | Guard, Shield, Fortify, Piercing |
| **Symbiote Circles** | Aggressive tempo | Rush, Lethal, Regenerate, Volatile |
| **Obsidion Syndicate** | Burst damage | Lifesteal, Stealth, Quick, Charge |
Plus **Neutral** cards and **12 unique Commanders** with passive abilities.
## Documentation
### Game Design
- [Game Rules](docs/game-design/rules.md) — Complete rules, keywords, card types
- [Lore & World](docs/game-design/lore.md) — Faction history and characters
### Art & Assets
- [Style Guide](docs/art/style-guide.md) — Visual design guidelines
- [Asset Generation](docs/art/asset-generation.md) — AI-assisted art pipeline
### ML & AI
- [Bot System](docs/ml-infrastructure/bots.md) — Bot architecture, weight tuning
- [Training Pipeline](docs/ml-infrastructure/training.md) — PPO, AlphaZero, behavioral cloning
- [Ratings System](docs/ml-infrastructure/ratings.md) — Elo tracking for decks and agents
### Development
- [CLI Reference](docs/development/cli.md) — Command-line tools
- [Reporting](docs/development/reporting.md) — Analysis and visualization
## Performance
| Metric | Value |
|--------|-------|
| Random games | ~33,000/sec |
| Greedy bot games | ~4,300/sec |
| MCTS (100 sims) | ~22ms/move |
| Engine state clone | ~245 ns |
| State tensor encode | ~158 ns |
## Commands
```bash
# Build & Test
cargo build --release # Full workspace
cargo nextest run --status-level=fail # Rust tests
uv run pytest python/tests # Python tests
# Lint
cargo lint # Rust (clippy)
uv run mypy python/essence_wars # Python types
pnpm run check # Svelte/TS (in UI crate)
# Game Tools
cargo run --release --bin arena -- # Bot matches
cargo run --release --bin validate -- # Quick balance check
cargo run --release --bin benchmark -- # Thorough analysis
cargo run --release --bin swiss -- # Tournament mode
```
## Contributing
Contributions are welcome! This project uses:
- **Rust** for the core engine and desktop backend
- **Svelte 5** (with runes) for the desktop frontend
- **Python** for ML agents and training
Each crate has its own `CLAUDE.md` with AI-developer context and `README.md` with human contributor documentation.
### Development Setup
```bash
# Rust
cargo build --release
# Python (using uv)
uv sync --all-groups
uv run pytest python/tests
# UI (using pnpm)
cd crates/essence-wars-ui
pnpm install
pnpm tauri:dev
```
## Citation
If you use Essence Wars in your research, please cite:
```bibtex
@software{essence_wars,
  title  = {Essence Wars: A High-Performance Card Game Environment for RL Research},
  author = {Wissmann, Christian},
  year   = {2025},
  url    = {https://github.com/christianwissmann85/essence-wars}
}
```
## License
MIT License — see [LICENSE](LICENSE) for details.
---
<p align="center">
<strong>Built with Rust, Svelte, and PyTorch</strong><br>
<em>Designed for humans and AIs alike</em>
</p>
| text/markdown; charset=UTF-8; variant=GFM | Christian Wissmann | null | null | null | MIT | reinforcement-learning, game, gymnasium, ai, mcts, card-game, pettingzoo | [
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Languag... | [] | null | null | >=3.10 | [] | [] | [] | [
"numpy>=1.24.0",
"gymnasium>=0.29.0",
"essence-wars[analysis,artgen,cloud,hub,pettingzoo,train]; extra == \"all\"",
"pandas>=2.2.0; extra == \"analysis\"",
"matplotlib>=3.8.0; extra == \"analysis\"",
"seaborn>=0.13.0; extra == \"analysis\"",
"plotly>=5.18.0; extra == \"analysis\"",
"jinja2>=3.1.0; ext... | [] | [] | [] | [
"Documentation, https://christianWissmann85.github.io/essence-wars/",
"Homepage, https://github.com/christianWissmann85/essence-wars",
"Issues, https://github.com/christianWissmann85/essence-wars/issues",
"Repository, https://github.com/christianWissmann85/essence-wars"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:32:42.658902 | essence_wars-0.8.8.tar.gz | 732,306 | 9b/28/403a79bed239d38d6f15f05098195e5e521cc7fe5840a32ebcf886a1b8ea/essence_wars-0.8.8.tar.gz | source | sdist | null | false | 863753da9a87472034290e8352bbea43 | 9ea46862929858bab5be522ff5f794c2a89cd84fd2d70a7e7b6a47454c9bec19 | 9b28403a79bed239d38d6f15f05098195e5e521cc7fe5840a32ebcf886a1b8ea | null | [
"LICENSE"
] | 759 |
2.4 | pulumi-kubernetes-ingress-nginx | 0.2.0a1771649924 | Strongly-typed NGINX Ingress Controller installation | # Pulumi NGINX Ingress Controller Component
This repo contains the Pulumi NGINX Ingress Controller component for Kubernetes. This ingress controller
uses NGINX as a reverse proxy and load balancer.
This component wraps [the NGINX Ingress Controller maintained by the Kubernetes project](https://github.com/kubernetes/ingress-nginx),
and offers a Pulumi-friendly and strongly-typed way to manage ingress controller installations.
After installing this component to your cluster, you can use it by adding the
`kubernetes.io/ingress.class: nginx` annotation to your `Ingress` resources.
For examples of usage, see [the official documentation](
https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/), or refer to [the examples](/examples)
in this repo.
## To Use
To use this component, first install the Pulumi Package:
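For Python, the package is available on PyPI (the package name below matches this distribution; other Pulumi languages use their own package managers):

```shell
pip install pulumi-kubernetes-ingress-nginx
```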
Afterwards, import the library and instantiate it within your Pulumi program:
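A minimal Pulumi program might look like the following sketch. The `IngressController` resource and `ControllerArgs` type come from this package, but the specific arguments shown (`replica_count`) are illustrative; consult the Pulumi Registry API docs for the exact set of inputs:

```python
import pulumi
import pulumi_kubernetes_ingress_nginx as nginx

# Instantiate the ingress controller component. `replica_count` is an
# illustrative option; the component accepts the full set of Helm chart
# values as strongly-typed arguments.
ctrl = nginx.IngressController(
    "myctrl",
    controller=nginx.ControllerArgs(
        replica_count=2,
    ),
)

# Export the Helm release status for inspection.
pulumi.export("status", ctrl.status)
```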
## Configuration
This component supports all of the configuration options of the [official Helm chart](
https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx), except that these
are strongly typed so you will get IDE support and static error checking.
The Helm deployment uses reasonable defaults, including the chart name and repo URL. If you need
to override them, you may do so using the `helmOptions` parameter. Refer to
[the API docs for the `kubernetes:helm/v3:Release` Pulumi type](
https://www.pulumi.com/docs/reference/pkg/kubernetes/helm/v3/release/#inputs) for a full set of choices.
For complete details, refer to the Pulumi Package details within the Pulumi Registry.
| text/markdown | null | null | null | null | Apache-2.0 | pulumi, kubernetes, nginx, kind/component, category/infrastructure | [] | [] | null | null | >=3.9 | [] | [] | [] | [
"parver>=0.2.1",
"pulumi<4.0.0,>=3.0.0",
"pulumi-kubernetes<5.0.0,>=4.0.0",
"semver>=2.8.1",
"typing-extensions<5,>=4.11; python_version < \"3.11\""
] | [] | [] | [] | [
"Homepage, https://pulumi.io",
"Repository, https://github.com/pulumi/pulumi-kubernetes-ingress-nginx"
] | twine/5.0.0 CPython/3.11.8 | 2026-02-21T05:32:40.138718 | pulumi_kubernetes_ingress_nginx-0.2.0a1771649924.tar.gz | 35,259 | 4a/8a/15f6e39d7a669f44cd6346058de235c54e7a13a83079baef5edcfa35a16a/pulumi_kubernetes_ingress_nginx-0.2.0a1771649924.tar.gz | source | sdist | null | false | e4d7cd6aa70556b6f4b032d2f023a83b | 73ca2c833acb0252540720325eaee874a105cf422409bdda186f04cf2951f6ac | 4a8a15f6e39d7a669f44cd6346058de235c54e7a13a83079baef5edcfa35a16a | null | [] | 198 |
2.4 | iamdata | 0.1.202602211 | IAM data for AWS actions, resources, and conditions based on IAM policy documents. Checked for updates daily. | # IAM Data In Python Package
This is a simple package for utilizing AWS IAM data for Services, Actions, Resources, and Condition Keys. Data is embedded in the python package.
New data is checked against the AWS IAM documentation and updated daily if there are changes.
## Installation
```bash
pip install iam-data
```
## Usage
```python
from iamdata import IAMData
iam_data = IAMData()
print(f"Data Version {iam_data.data_version()} updated at {iam_data.data_updated_at()}")
for service_key in iam_data.services.get_service_keys():
    service_name = iam_data.services.get_service_name(service_key)
    print(f"Getting Actions for {service_name}")
    for action in iam_data.actions.get_actions_for_service(service_key):
        action_details = iam_data.actions.get_action_details(service_key, action)
        print(f"{service_key}:{action} => {action_details}")
```
## API
### Services
* `services.get_service_keys()` - Returns a list of all service keys such as 's3', 'ec2', etc.
* `services.get_service_name(service_key)` - Returns the service name for a given service key.
* `services.service_exists(service_key)` - Returns True if the service key exists.
### Actions
* `actions.get_actions_for_service(service_key)` - Returns an array of all actions for a given service key.
* `actions.get_action_details(service_key, action_key)` - Returns an object with the action details such as `description`, `resourceTypes`, and `conditionKeys`.
* `actions.action_exists(service_key, action_key)` - Returns true if the action exists.
### Resources
* `resources.get_resource_types_for_service(service_key)` - Returns an array of all resource types for a given service key.
* `resources.get_resource_type_details(service_key, resource_type_key)` - Returns an object with the resource type details such as `description`, `arnFormat`, and `conditionKeys`.
* `resources.resource_type_exists(service_key, resource_type_key)` - Returns true if the resource type exists.
### Conditions Keys
* `conditions.get_condition_keys_for_service(service_key)` - Returns an array of all condition keys for a given service key.
* `conditions.get_condition_key_details(service_key, condition_key)` - Returns an object with the condition key details such as `description`, `conditionValueTypes`, and `conditionOperators`.
* `conditions.condition_key_exists(service_key, condition_key)` - Returns true if the condition key exists.
### Version Info
The version number is formatted as `major.minor.updatedAt`. `updatedAt` is the date the data was last updated, in the format `YYYYMMDDX`, where `X` is a counter to enable publishing more than once per day if necessary. For example, version `0.1.202408291` has data updated on August 29th, 2024.
The version can be accessed using the `data_version()` method.
There is also `data_updated_at()`, which returns the date the data was last updated.
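The embedded date can also be recovered from the version string itself with stdlib parsing alone. The helper below is an illustrative sketch, not part of the package:

```python
from datetime import date


def parse_data_version(version: str) -> tuple[date, int]:
    """Split a version like '0.1.202408291' into the data date
    and the same-day publish counter (illustrative helper)."""
    updated_at = version.split(".")[2]            # e.g. "202408291"
    stamp, counter = updated_at[:8], int(updated_at[8:])
    return date(int(stamp[:4]), int(stamp[4:6]), int(stamp[6:8])), counter


d, n = parse_data_version("0.1.202408291")
print(d, n)  # 2024-08-29 1
```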
| text/markdown | null | David Kerber <dave@cloudcopilot.io> | null | null | null | null | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.7 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/cloud-copilot/iam-data-python",
"Issues, https://github.com/cloud-copilot/iam-data-python/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T05:31:21.806215 | iamdata-0.1.202602211.tar.gz | 774,045 | 75/5b/88dc0c735a854b01333374022a34b38601d4d4fde1c2aa68bb6b8e34c1a8/iamdata-0.1.202602211.tar.gz | source | sdist | null | false | 0b5952df3cff5e59b90be1b9550ba029 | cc81535c90b309bf7856121250b3937e45da0d55fba8abc7b803e83864ae4292 | 755b88dc0c735a854b01333374022a34b38601d4d4fde1c2aa68bb6b8e34c1a8 | null | [
"LICENSE.txt"
] | 872 |
2.4 | fred-oss | 0.66.0 | FREDOSS | # FREDOSS
This is the open-source baseline Python package `fred`, published by `fred.fahera.mx` (Fahera's Research, Education, and Development Team).
## Installation
```
$ pip install fred-oss
```
By default, the `fred-oss` package installs only the `default` dependencies. You can select additional
dependency sets using 'dependency tags' with the following pattern:
```
$ pip install 'fred-oss[<tag-1>,<tag-2>,...]'
```
Where `<tag-i>` can be:
* `default`
* `all`
* ...
| text/markdown | Fahera Research, Education, and Development | fred@fahera.mx | null | null | null | null | [] | [] | https://fred.fahera.mx | null | >=3.12 | [] | [] | [] | [
"fire==0.7.1",
"psutil==7.0.0",
"dill==0.4.0",
"redis==6.4.0",
"requests==2.32.5",
"fastapi==0.116.2",
"uvicorn[standard]==0.35.0",
"minio==7.2.18",
"pillow==11.3.0",
"torch==2.10.0"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:28:19.003243 | fred_oss-0.66.0.tar.gz | 61,408 | 94/bd/7e7b3d21e38d6903960b801ebd291926f036e6f52003434255d1c306fe30/fred_oss-0.66.0.tar.gz | source | sdist | null | false | d522ddcfcd8e138d142a34409ed3f379 | 9a2aaf866184774bb8e73736b3d00104e843b5614556c92e77e0c406ef1acdcd | 94bd7e7b3d21e38d6903960b801ebd291926f036e6f52003434255d1c306fe30 | null | [
"NOTICE.txt"
] | 175 |
2.3 | karpo-sdk | 0.2.0 | The official Python library for the karpo API | # Karpo Python API library
<!-- prettier-ignore -->
[![PyPI version](https://img.shields.io/pypi/v/karpo-sdk.svg)](https://pypi.org/project/karpo-sdk/)
The Karpo Python library provides convenient access to the Karpo REST API from any Python 3.9+
application. The library includes type definitions for all request params and response fields,
and offers both synchronous and asynchronous clients powered by [httpx](https://github.com/encode/httpx).
It is generated with [Stainless](https://www.stainless.com/).
## Documentation
The REST API documentation can be found on [docs.karpo.ai](https://docs.karpo.ai). The full API of this library can be found in [api.md](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/api.md).
## Installation
```sh
# install from PyPI
pip install karpo-sdk
```
## Usage
The full API of this library can be found in [api.md](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/api.md).
```python
import os
from karpo_sdk import Karpo
client = Karpo(
    api_key=os.environ.get("KARPO_API_KEY"),  # This is the default and can be omitted
    # defaults to "production".
    environment="staging",
)
page = client.agents.list()
print(page.data)
```
While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `KARPO_API_KEY="My API Key"` to your `.env` file
so that your API Key is not stored in source control.
## Async usage
Simply import `AsyncKarpo` instead of `Karpo` and use `await` with each API call:
```python
import os
import asyncio
from karpo_sdk import AsyncKarpo
client = AsyncKarpo(
    api_key=os.environ.get("KARPO_API_KEY"),  # This is the default and can be omitted
    # defaults to "production".
    environment="staging",
)


async def main() -> None:
    page = await client.agents.list()
    print(page.data)
asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install 'karpo-sdk[aiohttp]'
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import os
import asyncio
from karpo_sdk import DefaultAioHttpClient
from karpo_sdk import AsyncKarpo
async def main() -> None:
    async with AsyncKarpo(
        api_key=os.environ.get("KARPO_API_KEY"),  # This is the default and can be omitted
        http_client=DefaultAioHttpClient(),
    ) as client:
        page = await client.agents.list()
        print(page.data)
asyncio.run(main())
```
## Using types
Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
- Serializing back into JSON, `model.to_json()`
- Converting to a dictionary, `model.to_dict()`
Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
## Pagination
List methods in the Karpo API are paginated.
This library provides auto-paginating iterators with each list response, so you do not have to request successive pages manually:
```python
from karpo_sdk import Karpo
client = Karpo()
all_agents = []
# Automatically fetches more pages as needed.
for agent in client.agents.list():
    # Do something with agent here
    all_agents.append(agent)
print(all_agents)
```
Or, asynchronously:
```python
import asyncio
from karpo_sdk import AsyncKarpo
client = AsyncKarpo()
async def main() -> None:
    all_agents = []
    # Iterate through items across all pages, issuing requests as needed.
    async for agent in client.agents.list():
        all_agents.append(agent)
    print(all_agents)
asyncio.run(main())
```
Alternatively, you can use the `.has_next_page()`, `.next_page_info()`, or `.get_next_page()` methods for more granular control working with pages:
```python
first_page = await client.agents.list()
if first_page.has_next_page():
    print(f"will fetch next page using these details: {first_page.next_page_info()}")
    next_page = await first_page.get_next_page()
    print(f"number of items we just fetched: {len(next_page.data)}")
# Remove `await` for non-async usage.
```
Or just work directly with the returned data:
```python
first_page = await client.agents.list()
for agent in first_page.data:
    print(agent.id)
# Remove `await` for non-async usage.
```
## Handling errors
When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `karpo_sdk.APIConnectionError` is raised.
When the API returns a non-success status code (that is, 4xx or 5xx
response), a subclass of `karpo_sdk.APIStatusError` is raised, containing `status_code` and `response` properties.
All errors inherit from `karpo_sdk.APIError`.
```python
import karpo_sdk
from karpo_sdk import Karpo
client = Karpo()
try:
    client.agents.list()
except karpo_sdk.APIConnectionError as e:
    print("The server could not be reached")
    print(e.__cause__)  # an underlying Exception, likely raised within httpx.
except karpo_sdk.RateLimitError as e:
    print("A 429 status code was received; we should back off a bit.")
except karpo_sdk.APIStatusError as e:
    print("Another non-200-range status code was received")
    print(e.status_code)
    print(e.response)
```
Error codes are as follows:
| Status Code | Error Type |
| ----------- | -------------------------- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
### Retries
Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
429 Rate Limit, and >=500 Internal errors are all retried by default.
You can use the `max_retries` option to configure or disable retry settings:
```python
from karpo_sdk import Karpo
# Configure the default for all requests:
client = Karpo(
    # default is 2
    max_retries=0,
)
# Or, configure per-request:
client.with_options(max_retries=5).agents.list()
```
### Timeouts
By default requests time out after 1 minute. You can configure this with a `timeout` option,
which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
```python
import httpx

from karpo_sdk import Karpo

# Configure the default for all requests:
client = Karpo(
    # 20 seconds (default is 1 minute)
    timeout=20.0,
)
# More granular control:
client = Karpo(
    timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
)
# Override per-request:
client.with_options(timeout=5.0).agents.list()
```
On timeout, an `APITimeoutError` is thrown.
Note that requests that time out are [retried twice by default](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/#retries).
## Advanced
### Logging
We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
You can enable logging by setting the environment variable `KARPO_LOG` to `info`.
```shell
$ export KARPO_LOG=info
```
Or to `debug` for more verbose logging.
### How to tell whether `None` means `null` or missing
In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
```py
if response.my_field is None:
    if 'my_field' not in response.model_fields_set:
        print('Got json like {}, without a "my_field" key present at all.')
    else:
        print('Got json like {"my_field": null}.')
```
### Accessing raw response data (e.g. headers)
The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
```py
from karpo_sdk import Karpo
client = Karpo()
response = client.agents.with_raw_response.list()
print(response.headers.get('X-My-Header'))
agent = response.parse() # get the object that `agents.list()` would have returned
print(agent.id)
```
These methods return an [`APIResponse`](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/src/karpo_sdk/_response.py) object.
The async client returns an [`AsyncAPIResponse`](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/src/karpo_sdk/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
#### `.with_streaming_response`
The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
```python
with client.agents.with_streaming_response.list() as response:
    print(response.headers.get("X-My-Header"))

    for line in response.iter_lines():
        print(line)
```
The context manager is required so that the response will reliably be closed.
### Making custom/undocumented requests
This library is typed for convenient access to the documented API.
If you need to access undocumented endpoints, params, or response properties, the library can still be used.
#### Undocumented endpoints
To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
http verbs. Options on the client will be respected (such as retries) when making this request.
```py
import httpx
response = client.post(
    "/foo",
    cast_to=httpx.Response,
    body={"my_param": True},
)
print(response.headers.get("x-foo"))
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
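For example, assuming the request options described above are accepted as keyword arguments on each method (and with `secret_param` and `X-Debug` as purely hypothetical names), an extra param might be sent like this:

```python
from karpo_sdk import Karpo

client = Karpo()

# `secret_param` and `X-Debug` are hypothetical, undocumented fields
# used only to illustrate the extra_* request options.
page = client.agents.list(
    extra_query={"secret_param": "value"},
    extra_headers={"X-Debug": "1"},
)
```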
#### Undocumented response properties
To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
### Configuring the HTTP client
You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
- Custom [transports](https://www.python-httpx.org/advanced/transports/)
- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
```python
import httpx
from karpo_sdk import Karpo, DefaultHttpxClient
client = Karpo(
    # Or use the `KARPO_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
You can also customize the client on a per-request basis by using `with_options()`:
```python
client.with_options(http_client=DefaultHttpxClient(...))
```
### Managing HTTP resources
By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
```py
from karpo_sdk import Karpo
with Karpo() as client:
    # make requests here
    ...

# HTTP client is now closed
```
## Versioning
This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
1. Changes that only affect static types, without breaking runtime behavior.
2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
3. Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
We are keen for your feedback; please open an [issue](https://www.github.com/machinepulse-ai/karpo-op-python-sdk/issues) with questions, bugs, or suggestions.
### Determining the installed version
If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.
You can determine the version that is being used at runtime with:
```py
import karpo_sdk
print(karpo_sdk.__version__)
```
## Requirements
Python 3.9 or higher.
## Contributing
See [the contributing documentation](https://github.com/machinepulse-ai/karpo-op-python-sdk/tree/main/./CONTRIBUTING.md).
| text/markdown | null | Karpo <contact@example.com> | null | null | Apache-2.0 | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: OS Independent",
"Operating System :: POSIX",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"anyio<5,>=3.5.0",
"distro<2,>=1.7.0",
"httpx<1,>=0.23.0",
"pydantic<3,>=1.9.0",
"sniffio",
"typing-extensions<5,>=4.10",
"aiohttp; extra == \"aiohttp\"",
"httpx-aiohttp>=0.1.9; extra == \"aiohttp\""
] | [] | [] | [] | [
"Homepage, https://github.com/machinepulse-ai/karpo-op-python-sdk",
"Repository, https://github.com/machinepulse-ai/karpo-op-python-sdk"
] | twine/5.1.1 CPython/3.12.9 | 2026-02-21T05:28:03.453048 | karpo_sdk-0.2.0.tar.gz | 136,609 | 55/93/1deb34ac123cf8e32cd4260d2b1d11eae615fdf2764672adca2177d4582e/karpo_sdk-0.2.0.tar.gz | source | sdist | null | false | be1df3ddf78b2d3f031021142f8a1601 | 6eec0a7407dced754afca3b3067b1c2ade73df1842b50b90725739d5d202f662 | 55931deb34ac123cf8e32cd4260d2b1d11eae615fdf2764672adca2177d4582e | null | [] | 228 |
2.4 | retio-pagemap | 0.5.0 | Structured web page representation for AI agents — 97% HTML token reduction | <!-- mcp-name: io.github.Retio-ai/pagemap -->
# PageMap
**The browsing MCP server that fits in your context window.**
Compresses ~100K-token HTML into a 2-5K-token structured map while preserving every actionable element. AI agents can **read and interact** with any web page at 97% fewer tokens.
> *"Give your agent eyes and hands on the web."*
[](https://github.com/Retio-ai/Retio-pagemap/actions/workflows/ci.yml)
[](https://pypi.org/project/retio-pagemap/)
[](https://pypi.org/project/retio-pagemap/)
[](https://www.gnu.org/licenses/agpl-3.0)
---
## Why PageMap?
Playwright MCP dumps 50-540KB accessibility snapshots per page, overflowing context windows after 2-3 navigations. Firecrawl and Jina convert HTML to markdown — read-only, no interaction.
PageMap gives your agent a **compressed, actionable** view of any web page:
| | PageMap | Playwright MCP | Firecrawl | Jina Reader |
|--|:------:|:---------:|:-----------:|:--------:|
| **Tokens / page** | **2-5K** | 6-50K | 10-50K | 10-50K |
| **Interaction** | **click / type / select / hover** | Raw tree parsing | Read-only | Read-only |
| **Multi-page sessions** | **Unlimited** | Breaks at 2-3 pages | N/A | N/A |
| **Task success (94 tasks)** | **63.6%** | 61.5% | 64.5% | 57.8% |
| **Avg tokens / task** | **2,403** | 13,737 | 13,886 | 11,423 |
| **Cost / 94 tasks** | **$0.97** | $4.09 | $3.97 | $2.26 |
> Benchmarked across 11 e-commerce sites, 94 static tasks, 7 conditions. PageMap matches competitors in accuracy while using **5.7x fewer tokens** and is the only tool that supports **interaction**.
---
## Quick Start
### MCP Server (Claude Code / Cursor)
```bash
pip install retio-pagemap
playwright install chromium
```
Add to your project's `.mcp.json`:
```json
{
  "mcpServers": {
    "pagemap": {
      "command": "uvx",
      "args": ["retio-pagemap"]
    }
  }
}
```
Restart your IDE. Nine tools become available:
| Tool | Description |
|------|-------------|
| `get_page_map` | Navigate to URL, return structured PageMap with ref numbers |
| `execute_action` | Click, type, select, hover on elements by ref number |
| `get_page_state` | Lightweight page state check (URL, title) |
| `take_screenshot` | Capture viewport or full-page screenshot |
| `navigate_back` | Go back in browser history |
| `scroll_page` | Scroll up/down by page, half-page, or pixel amount |
| `fill_form` | Batch-fill multiple form fields in one call |
| `wait_for` | Wait for text to appear or disappear on the page |
| `batch_get_page_map` | Get Page Maps for multiple URLs in parallel |
### CLI
```bash
pagemap build --url "https://www.example.com/product/123"
```
---
## Output Example
```yaml
URL: https://www.example.com/product/air-max-90
Title: Nike Air Max 90
Type: product_detail
## Actions
[1] searchbox: Search (type)
[2] button: Add to Cart (click)
[3] combobox: Size (select) options=[250,255,260,265,270]
[4] button: Buy Now (click)
## Info
<h1>Nike Air Max 90</h1>
<span itemprop="price">139,000</span>
<span itemprop="ratingValue">4.7</span>
<span>2,341 reviews</span>
## Images
[1] https://cdn.example.com/air-max-90-1.jpg
```
An agent reads the page and executes `execute_action(ref=3, action="select", value="260")` to select a size — all in one context window.
---
## How It Works
```
Raw HTML (~100K tokens)
→ PageMap (2-5K tokens)
├── Actions Interactive elements with numbered refs
├── Info Compressed HTML (prices, titles, key info)
├── Images Product image URLs
└── Metadata Structured data (JSON-LD, Open Graph)
```
**Pipeline:**
```
URL → Playwright Browser
├─→ AX Tree ──→ 3-Tier Interactive Detector
└─→ HTML ─────→ 5-Stage Pruning Pipeline
1. HTMLRAG preprocessing
2. Script extraction (JSON-LD, RSC payloads)
3. Semantic filtering (nav, footer, aside)
4. Schema-aware chunk selection
5. Attribute stripping & compression
→ Budget-aware assembly → PageMap
```
### Interactive Detection (3-Tier)
| Tier | Source | Examples |
|:----:|--------|----------|
| 1 | ARIA roles with names | Buttons, links, menus |
| 2 | Implicit HTML roles | `<input>`, `<select>`, `<textarea>` |
| 3 | CDP event listeners | Divs/spans with click handlers |
---
## Reliability
`execute_action` is built for real-world web pages:
- **Locator fallback chain** — `get_by_role(exact)` → CSS selector → degraded match. Handles duplicate labels, dynamic IDs, and shadow DOM
- **Auto-retry** — up to 2 retries within 15s budget with locator re-resolution. Click retried only on pre-dispatch failures to prevent double-submission
- **DOM change detection** — structural fingerprint comparison catches URL-stable mutations (modals, SPA navigations, accordion toggles). Stale refs auto-invalidated
- **Popup & tab handling** — new tabs/popups auto-detected, SSRF-checked, and switched to. Blocked popups closed automatically
- **JS dialog handling** — alert/beforeunload auto-accepted, confirm/prompt auto-dismissed. Dialog content buffered and reported to the agent
- **Crash recovery** — 30s action timeout, browser death detection, automatic session invalidation with recovery guidance
---
## Security
PageMap treats all web content as **untrusted input**:
- **SSRF Defense** — layered protection: scheme whitelist, DNS rebinding defense, private IP blocking, post-redirect DNS revalidation, context-level route guard
- **Browser Hardening** — WebRTC IP leak prevention, ServiceWorker blocking, internal protocol blocking (`view-source:`, `blob:`, `data:`), Markdown injection defense
- **Prompt Injection Defense** — nonce-based content boundaries, role-prefix stripping, Unicode control char removal
- **Action Sandboxing** — whitelisted actions only, dangerous key combos blocked, affordance-action compatibility pre-check
- **Input Validation** — value length limits, timeout enforcement, error sanitization
### Local Development
By default, PageMap blocks all private network access (localhost, 192.168.x.x, etc.)
as an SSRF defense. For local development workflows, enable `--allow-local`:
**Option A: CLI flag**
```json
{ "command": "uvx", "args": ["retio-pagemap", "--allow-local"] }
```
**Option B: Environment variable** (containerized deployments)
```json
{ "command": "uvx", "args": ["retio-pagemap"], "env": {"PAGEMAP_ALLOW_LOCAL": "1"} }
```
Cloud metadata endpoints (169.254.x.x, metadata.google.internal) remain blocked.
---
## Multilingual Support
Built-in i18n for price, review, rating, and pagination extraction:
| Language | Locale | Price formats | Keywords |
|----------|:------:|---------------|----------|
| Korean | `ko` | 원, ₩ | 리뷰, 평점, 다음, 더보기 |
| English | `en` | $, £, € | reviews, rating, next, load more |
| Japanese | `ja` | ¥, 円 | レビュー, 評価, 次へ |
| French | `fr` | €, CHF | avis, note, suivant |
| German | `de` | €, CHF | Bewertungen, Bewertung, weiter |
Locale is auto-detected from the URL domain.
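That auto-detection can be approximated by a simple top-level-domain lookup. The sketch below is illustrative only — the mapping table and helper are hypothetical, not PageMap's actual implementation:

```python
from urllib.parse import urlparse

# Illustrative TLD -> locale mapping (assumed, not PageMap's actual table)
TLD_LOCALES = {"kr": "ko", "jp": "ja", "fr": "fr", "de": "de"}

def detect_locale(url: str, default: str = "en") -> str:
    """Guess an extraction locale from the URL's top-level domain."""
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1]
    return TLD_LOCALES.get(tld, default)

print(detect_locale("https://shop.example.co.jp/item/1"))  # "ja"
```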
---
## Python API
```python
import asyncio
from pagemap.browser_session import BrowserSession
from pagemap.page_map_builder import build_page_map_live
from pagemap.serializer import to_agent_prompt, to_json
async def main():
    async with BrowserSession() as session:
        page_map = await build_page_map_live(session, "https://example.com/product/123")

        # Agent-optimized text format
        print(to_agent_prompt(page_map))

        # Structured JSON
        print(to_json(page_map))

        # Direct field access
        print(page_map.page_type)       # "product_detail"
        print(page_map.interactables)   # [Interactable(ref=1, role="button", ...)]
        print(page_map.pruned_context)  # compressed HTML
        print(page_map.images)          # ["https://cdn.example.com/img.jpg"]
        print(page_map.metadata)        # {"name": "...", "price": "..."}

asyncio.run(main())
```
For offline processing (no browser):
```python
from pagemap.page_map_builder import build_page_map_offline
html = open("page.html").read()
page_map = build_page_map_offline(html, url="https://example.com/product/123")
```
---
## Requirements
- Python 3.11+
- Chromium (`playwright install chromium`)
## Community
Have a question or idea? Join the conversation in [GitHub Discussions](https://github.com/Retio-ai/Retio-pagemap/discussions).
## License
AGPL-3.0-only — see [LICENSE](LICENSE) for the full text.
For commercial licensing options, contact **retio1001@retio.ai**.
---
*PageMap — Structured Web Intelligence for the Agent Era.*
| text/markdown | Retio AI | null | null | null | AGPL-3.0-only | ai-agent, html-compression, mcp, playwright, token-reduction, web-scraping | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: GNU Affero General Public License v3",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Pro... | [] | null | null | >=3.11 | [] | [] | [] | [
"beautifulsoup4>=4.9.0",
"lxml>=5.0.0",
"mcp>=1.0.0",
"playwright>=1.40.0",
"pydantic>=2.0.0",
"pyyaml>=6.0",
"rapidfuzz>=3.0.0",
"tiktoken>=0.5.0",
"anthropic>=0.40.0; extra == \"benchmark\"",
"firecrawl-py>=1.0.0; extra == \"benchmark\"",
"html2text>=2024.2.0; extra == \"benchmark\"",
"httpx... | [] | [] | [] | [
"Homepage, https://github.com/Retio-ai/pagemap",
"Repository, https://github.com/Retio-ai/pagemap",
"Issues, https://github.com/Retio-ai/pagemap/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T05:26:58.041875 | retio_pagemap-0.5.0-py3-none-any.whl | 151,522 | 78/56/4e24dcac339db668fc397947182569d7005702d8049710b86ae78b04c5b9/retio_pagemap-0.5.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 2629a6942557874aee6bbdcb0e40d654 | ea66f049e4acbb47f703867d5489bb8b1d42ef3a7de7588f3ec2d18deafc5687 | 78564e24dcac339db668fc397947182569d7005702d8049710b86ae78b04c5b9 | null | [
"LICENSE"
] | 229 |
2.4 | pychop | 0.4.5 | Python code for simulating low precision floating-point arithmetic | <div align="center">
<img src="docs/imgs/pychop_logo.png" width="330">
# Pychop: efficient reduced-precision quantization library
[Build Status](https://dev.azure.com/conda-forge/feedstock-builds/_build/latest?definitionId=26671&branchName=main) [codecov](https://github.com/inEXASCALE/pychop/actions/workflows/codecov.yml) [PyPI](https://pypi.org/project/pychop/) [conda-forge](https://anaconda.org/conda-forge/pychop)
[PyPI version](https://pypi.python.org/pypi/pychop/)
[Downloads](https://pypi.org/project/pychop)
[Documentation](https://pychop.readthedocs.io/en/latest/?badge=latest)
[conda-forge recipe](https://anaconda.org/conda-forge/pychop)
</div>
With the increasing availability of lower-precision floating-point arithmetic beyond IEEE 64-bit/32-bit precision, both in hardware and software simulation, reduced-precision formats such as 16-bit half precision have gained significant attention in scientific computing and machine learning domains. These formats offer higher computational throughput, reduced data transfer overhead, and lower energy consumption.
Inspired by MATLAB’s chop function by Nick Higham, ``Pychop`` is a Python library designed for efficient quantization, enabling the conversion of single- or double-precision numbers into low-bitwidth representations. It allows users to define custom floating-point formats with a specified number of exponent and significand bits as well as fixed-point and integer quantization, offering fine-grained control over precision and range. The library supports multiple rounding modes, optional denormal number handling, and runs efficiently on both CPU and GPU devices. This makes it particularly useful for research, experimentation, and optimization in areas like machine learning, numerical analysis, and hardware design, where reduced precision can provide computational advantages.
``Pychop`` stands out for its versatility, efficiency, and ease of integration with NumPy, PyTorch, and JAX. Its key strengths—customizability, hardware independence, GPU support, and comprehensive rounding options—make it a valuable tool for both practical applications and theoretical exploration in numerical computing. By emulating low-precision formats within a high-precision environment (single or double), ``Pychop`` allows users to analyze quantization effects without requiring specialized hardware. The library supports both deterministic and stochastic rounding strategies and is optimized for vectorized operations with NumPy arrays, PyTorch tensors, and JAX arrays.
## Install
``Pychop`` requires Python 3 (>= 3.8) and relies on the following dependencies: numpy >= 1.17.2, pandas >= 2.0, torch, and jax.
To install the current release via pip, use:
```bash
pip install pychop
```
Alternatively, `pychop` can be installed from the `conda-forge` channel by adding `conda-forge` to your channels:
```
conda config --add channels conda-forge
conda config --set channel_priority strict
```
Once the `conda-forge` channel has been enabled, `pychop` can be installed with `conda`:
```
conda install pychop
```
or with `mamba`:
```
mamba install pychop
```
It is possible to list all of the versions of `pychop` available on your platform with `conda`:
```
conda search pychop --channel conda-forge
```
or with `mamba`:
```
mamba search pychop --channel conda-forge
```
## Features
The ``Pychop`` class offers several key advantages that make it a powerful tool for developers, researchers, and engineers working with numerical computations:
* Customizable Precision
* Multiple Rounding Modes
* Hardware-Independent Simulation
* Support for Denormal Numbers
* GPU Acceleration
* Reproducible Stochastic Rounding
* Ease of Integration
* Error Detection
* Soft error simulation
### The supported floating point formats
The supported floating point arithmetic formats include:
| format | description | bits |
| ------------- | ------------- | ------------- |
| 'q43', 'fp8-e4m3' | NVIDIA quarter precision | 4 exponent bits, 3 significand bits |
| 'q52', 'fp8-e5m2' | NVIDIA quarter precision | 5 exponent bits, 2 significand bits |
| 'b', 'bfloat16' | bfloat16 | 8 exponent bits, 7 significand bits |
| 't', 'tf32' | TensorFloat-32 | 8 exponent bits, 10 significand bits |
| 'h', 'half', 'fp16' | IEEE half precision | 5 exponent bits, 10 significand bits |
| 's', 'single', 'fp32' | IEEE single precision | 8 exponent bits, 23 significand bits |
| 'd', 'double', 'fp64' | IEEE double precision | 11 exponent bits, 52 significand bits |
| 'c', 'custom' | custom format | - - |
``Pychop`` supports arbitrary built-in reduced-precision types for scalars, arrays, and tensors; see the [documentation](https://pychop.readthedocs.io/en/latest/builtin.html) for details. A simple scalar example is as follows:
```python
from pychop import Chop
from pychop.builtin import CPFloat
half = Chop(exp_bits=5, sig_bits=10, subnormal=True, rmode=1)
a = CPFloat(1.234567, half)
b = CPFloat(0.987654, half)
print(a) # CPFloat(1.23438, prec=half)
c = a + b # stays a CPFloat, chopped
print(c) # CPFloat(2.22203, prec=half)
d = a * b / 2.0
print(d) # CPFloat(0.609863, prec=half)
# mixed with a normal Python float
e = a + 3.14
print(e) # CPFloat(4.37438, prec=half)
```
### Examples
We will go through the main functionality of ``Pychop``; for details refer to the documentation.
#### (I). Floating point quantization
Users can specify the number of exponent (exp_bits) and significand (sig_bits) bits, enabling precise control over the trade-off between range and precision.
For example, setting exp_bits=5 and sig_bits=4 creates a compact 10-bit format (1 sign, 5 exponent, 4 significand), ideal for testing minimal precision scenarios.
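Under standard IEEE-style conventions (exponent bias, implicit leading significand bit), the range and precision implied by a choice of exp_bits and sig_bits can be computed directly. The helper below is an illustrative sketch, not part of ``Pychop``:

```python
def format_stats(exp_bits: int, sig_bits: int):
    """Key constants of a binary float format with an implicit leading bit."""
    bias = 2 ** (exp_bits - 1) - 1
    emax = bias                    # all-ones exponent reserved for Inf/NaN
    emin = 1 - bias
    eps = 2.0 ** (-sig_bits)       # spacing of floats just above 1.0
    max_finite = 2.0 ** emax * (2.0 - eps)
    return emin, emax, eps, max_finite

# IEEE half precision (exp_bits=5, sig_bits=10) for reference:
print(format_stats(5, 10))  # (-14, 15, 0.0009765625, 65504.0)
# The compact 10-bit format from the text (exp_bits=5, sig_bits=4):
print(format_stats(5, 4))   # (-14, 15, 0.0625, 63488.0)
```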
Rounding values to a specified precision format: ``Pychop`` supports fast low-precision floating-point quantization and also enables GPU emulation (simply move the input to a GPU device), with different rounding functions:
```Python
import pychop
from pychop import Chop
import numpy as np
np.random.seed(0)
X = np.random.randn(5000, 5000)
pychop.backend('numpy', 1) # Specify different backends, e.g., jax and torch
# One can also specify 'auto', the pychop will automatically detect the types,
# but speed will be degraded.
ch = Chop(exp_bits=5, sig_bits=10, rmode=3) # half precision
X_q = ch(X)
print(X_q[:10, 0])
```
If optimized performance is not required and broader emulation support is needed, the following interface can be used instead.
``Pychop`` also provides the same functionality as Higham's chop [1], including soft error simulation (by setting ``flip=True``), at somewhat reduced speed:
```Python
from pychop import FaultChop
ch = FaultChop('h') # Standard IEEE 754 half precision
X_q = ch(X) # Rounding values
```
One can also customize the precision via:
```Python
import pychop
from pychop import Customs, FaultChop
pychop.backend('numpy', 1)
ct1 = Customs(exp_bits=5, sig_bits=10) # half precision (5 exponent bits, 10+(1) significand bits, (1) is implicit bits)
ch = FaultChop(customs=ct1, rmode=3) # Round towards minus infinity
X_q = ch(X)
print(X_q[:10, 0])
ct2 = Customs(emax=15, t=11)
ch = FaultChop(customs=ct2, rmode=3)
X_q = ch(X)
print(X_q[:10, 0])
```
For quantization-aware training, a neural network can be built from the derived quantized layers (seamlessly integrated with the Straight-Through Estimator):
```Python
import torch.nn as nn
from pychop.layers import *
class MLP(nn.Module):
    def __init__(self, chop=None):
        super(MLP, self).__init__()
        self.flatten = nn.Flatten()
        self.fc1 = QuantizedLinear(256, 256, chop=chop)
        self.relu1 = nn.ReLU()
        self.dropout = nn.Dropout(0.2)
        self.fc2 = QuantizedLinear(256, 10, chop=chop)
        # 5 exponent bits, 10 explicit significand bits, round to nearest, ties to even

    def forward(self, x):
        x = self.flatten(x)
        x = self.fc1(x)
        x = self.relu1(x)
        x = self.dropout(x)
        x = self.fc2(x)
        return x
```
To enable quantization-aware training, pass the floating-point chopper ``ChopSTE`` or the fixed-point chopper ``ChopfSTE`` via the ``chop`` parameter; for detailed examples, see [example_CNN_ft.py](examples/example_CNN_ft.py) and [example_CNN_fp.py](examples/example_CNN_fp.py).
For integer quantization, please see [example_CNN_int.py](examples/example_CNN_int.py).
#### (II). Fixed point quantization
Similar to floating point quantization, one can set the corresponding backend. The dominant parameters are ibits and fbits, which are the bitwidths of the integer part and the fractional part, respectively.
```Python
pychop.backend('numpy')
from pychop import Chopf
ch = Chopf(ibits=4, fbits=4)
X_q = ch(X)
```
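Conceptually, fixed-point quantization rounds to the nearest multiple of 2^-fbits and saturates at the range implied by ibits. A pure-Python sketch of the idea (illustrative only — `Chopf`'s exact rounding and saturation options may differ):

```python
def fixed_point(x: float, ibits: int = 4, fbits: int = 4) -> float:
    """Round x to the nearest multiple of 2**-fbits, saturating to a
    signed (ibits + fbits)-bit two's-complement range."""
    scale = 2 ** fbits
    lo = -(2 ** (ibits - 1))           # -8.0 for ibits=4
    hi = 2 ** (ibits - 1) - 1 / scale  # 7.9375 for ibits=fbits=4
    q = round(x * scale) / scale
    return min(max(q, lo), hi)

print(fixed_point(3.14159))  # 3.125
print(fixed_point(100.0))    # saturates to 7.9375
```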
The code example can be found on the [guidance1](https://github.com/chenxinye/pychop/example/guidance1.ipynb) and [guidance2](https://github.com/chenxinye/pychop/example/guidance2.ipynb).
#### (III). Integer quantization
Integer quantization is another important feature of pychop. Its purpose is to convert floating-point numbers into low bit-width integers, which speeds up computation on certain hardware, and it performs quantization with user-defined bitwidths.
Integer arithmetic emulation in ``Pychop`` is implemented by the `Chopi` interface. It can be used in many circumstances and offers flexible options, such as symmetric or asymmetric quantization and a configurable number of bits. Its usage is illustrated below:
```Python
import numpy as np
import pychop
from pychop import Chopi
pychop.backend('numpy')
X = np.array([[0.1, -0.2], [0.3, 0.4]])
ch = Chopi(bits=8, symmetric=False)
X_q = ch.quantize(X) # Convert to integers
X_dq = ch.dequantize(X_q) # Convert back to floating points
```
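The asymmetric scheme behind such interfaces is uniform affine quantization: the observed value range is mapped onto the integer grid via a scale and a zero point. A generic sketch of the idea (illustrative, not `Chopi`'s exact internals):

```python
def affine_quantize(xs, bits=8):
    """Uniform asymmetric quantization: map [min, max] onto [0, 2**bits - 1]."""
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / (2 ** bits - 1)
    zero_point = round(-lo / scale)
    q = [round(x / scale) + zero_point for x in xs]       # integer codes
    dq = [(v - zero_point) * scale for v in q]            # dequantized values
    return q, dq

X = [0.1, -0.2, 0.3, 0.4]
q, dq = affine_quantize(X)
print(q)   # integer codes in [0, 255]
print(dq)  # close to X, within half a quantization step
```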
### Calling from MATLAB
If you use a Python virtual environment, ensure MATLAB detects it:
```MATLAB
pe = pyenv('Version', 'your_env\python.exe'); % or simply pe = pyenv();
```
To use ``Pychop`` from your MATLAB environment, simply import the module:
```MATLAB
pc = py.importlib.import_module('pychop');
ch = pc.Chop(exp_bits=5, sig_bits=10, rmode=1)
X = rand(100, 100);
X_q = ch(X);
```
Or more specifically, use
```MATLAB
np = py.importlib.import_module('numpy');
pc = py.importlib.import_module('pychop');
ch = pc.Chop(exp_bits=5, sig_bits=10, rmode=1)
X = np.random.randn(int32(100), int32(100));
X_q = ch(X);
```
### Use Cases
* Machine Learning: Test the impact of low-precision arithmetic on model accuracy and training stability, especially for resource-constrained environments like edge devices.
* Hardware Design: Simulate custom floating-point units before hardware implementation, optimizing bit allocations for specific applications.
* Numerical Analysis: Investigate quantization errors and numerical stability in scientific computations.
* Education: Teach concepts of floating-point representation, rounding, and denormal numbers with a hands-on, customizable tool.
## Contributing
Our software is licensed under the [MIT License](https://opensource.org/licenses/MIT). We welcome contributions in any form, and assistance with documentation is always appreciated. To contribute, open an issue, or fork the project, make your changes, and submit a pull request. We will do our best to work through any issues and requests.
## Acknowledgement
This project is supported by the European Union (ERC, [InEXASCALE](https://www.karlin.mff.cuni.cz/~carson/inexascale), 101075632). Views and opinions
expressed are those of the authors only and do not necessarily reflect those of the European
Union or the European Research Council. Neither the European Union nor the granting
authority can be held responsible for them.
## Citations
If you use ``Pychop`` in your research or simulations, cite:
```bibtex
@misc{carson2025,
title={Pychop: Emulating Low-Precision Arithmetic in Numerical Methods and Neural Networks},
author={Erin Carson and Xinye Chen},
year={2025},
eprint={2504.07835},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2504.07835},
}
```
### References
[1] Nicholas J. Higham and Srikara Pranesh, Simulating Low Precision Floating-Point Arithmetic, SIAM J. Sci. Comput., 2019.
[2] IEEE Standard for Floating-Point Arithmetic, IEEE Std 754-2019 (revision of IEEE Std 754-2008), IEEE, 2019.
[3] Intel Corporation, BFLOAT16---Hardware Numerics Definition, 2018.
[4] Jean-Michel Muller et al., Handbook of Floating-Point Arithmetic, Springer, 2018.
| text/markdown | null | Xinye Chen <xinyechenai@gmail.com>, Erin Carson <erinccarson@gmail.com> | null | Erin Carson <erinccarson@gmail.com>, Xinye Chen <xinyechenai@gmail.com> | MIT License | floating-point, low-precision, simulation, numerical | [
"Intended Audience :: Science/Research",
"Intended Audience :: Developers",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Pyth... | [] | null | null | >=3.8 | [] | [] | [] | [
"numpy>=1.17.2",
"pandas",
"dask[array]",
"torch>=1.12",
"jax>=0.4.8",
"jaxlib>=0.4.7"
] | [] | [] | [] | [
"Homepage, https://github.com/nla-group/pychop"
] | twine/6.1.0 CPython/3.12.11 | 2026-02-21T05:26:45.346048 | pychop-0.4.5.tar.gz | 116,269 | 58/ea/2b7518dc0a1d2a0d1f4dca4e80b5311187b756c098f21c393a35087ca04e/pychop-0.4.5.tar.gz | source | sdist | null | false | 7a7375c1eb3265b6e5689824af860868 | 9f3240c05a62f1208d2c0c13183183bee7b573d0e296f21104cdae3b1b28bdf5 | 58ea2b7518dc0a1d2a0d1f4dca4e80b5311187b756c098f21c393a35087ca04e | null | [
"LICENSE"
] | 184 |
2.4 | ciris-verify | 0.6.3 | Python bindings for CIRISVerify hardware-rooted license verification | # CIRISVerify Python Bindings
Python bindings for CIRISVerify, the hardware-rooted license verification module for the CIRIS ecosystem.
## Installation
```bash
pip install ciris-verify
```
**Note:** The CIRISVerify binary must be installed separately. See the [CIRISVerify documentation](https://github.com/CIRISAI/CIRISVerify) for installation instructions.
## Quick Start
```python
import asyncio
import os

from ciris_verify import CIRISVerify, LicenseStatus

async def main():
    # Initialize the verifier
    verifier = CIRISVerify()

    # Get license status with a fresh nonce
    status = await verifier.get_license_status(
        challenge_nonce=os.urandom(32)
    )

    # Check if professional capabilities are available
    if status.allows_licensed_operation():
        print("Professional license verified!")
        print(f"Tier: {status.license.tier}")
        print(f"Capabilities: {status.license.capabilities}")
    else:
        print("Running in community mode")

    # IMPORTANT: Always display the mandatory disclosure
    print(status.mandatory_disclosure.text)

asyncio.run(main())
```
## Mandatory Disclosure
Per the CIRIS ecosystem rules, agents **MUST** display the `mandatory_disclosure.text` to users. This ensures transparency about the agent's capabilities and licensing status.
```python
# The disclosure MUST be shown to users
disclosure = status.mandatory_disclosure
print(f"[{disclosure.severity.upper()}] {disclosure.text}")
```
## Capability Checking
For frequent capability checks, use the fast path:
```python
result = await verifier.check_capability("medical:diagnosis")
if result.allowed:
    # Capability is available
    pass
else:
    print(f"Capability denied: {result.reason}")
```
## Testing
For testing without the actual binary, use `MockCIRISVerify`:
```python
import os

from ciris_verify import MockCIRISVerify, LicenseStatus

# Create a mock that returns community mode
verifier = MockCIRISVerify(
    mock_status=LicenseStatus.UNLICENSED_COMMUNITY
)

# Use exactly like the real client (from within a coroutine or async test)
status = await verifier.get_license_status(os.urandom(32))
assert status.status == LicenseStatus.UNLICENSED_COMMUNITY
```
## Error Handling
```python
import os

from ciris_verify import (
CIRISVerifyError,
BinaryNotFoundError,
BinaryTamperedError,
VerificationFailedError,
)
try:
    verifier = CIRISVerify()
    status = await verifier.get_license_status(os.urandom(32))
except BinaryNotFoundError as e:
    # Binary not installed
    print(f"CIRISVerify not found: {e.path}")
except BinaryTamperedError:
    # CRITICAL: Binary has been modified
    # Halt all operations immediately
    raise SystemExit("SECURITY ALERT: Binary integrity compromised")
except VerificationFailedError as e:
    # Verification failed - operate in restricted mode
    print(f"Verification failed: {e}")
```
## License Status Codes
| Status | Code | Description |
|--------|------|-------------|
| `LICENSED_PROFESSIONAL` | 100 | Full professional license active |
| `LICENSED_PROFESSIONAL_GRACE` | 101 | License valid, in offline grace period |
| `UNLICENSED_COMMUNITY` | 200 | Community mode, no professional capabilities |
| `RESTRICTED_*` | 300-399 | Restricted mode due to verification issues |
| `ERROR_*` | 400-499 | Error states (revoked, expired, etc.) |
| `LOCKDOWN_*` | 500+ | Critical security failure, halt operations |
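Because the codes are grouped by numeric range, a client can derive a coarse operating mode from the code alone. This is an illustrative sketch based on the table above; in practice, prefer the typed `LicenseStatus` members:

```python
def operating_mode(code: int) -> str:
    """Map a numeric license status code to a coarse operating mode."""
    if code in (100, 101):
        return "professional"
    if 200 <= code < 300:
        return "community"
    if 300 <= code < 400:
        return "restricted"
    if 400 <= code < 500:
        return "error"
    if code >= 500:
        return "lockdown"   # halt operations
    return "unknown"

print(operating_mode(101))  # professional (offline grace period)
print(operating_mode(503))  # lockdown
```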
## Thread Safety
The client is thread-safe and can be used from multiple threads or async tasks concurrently.
## License
AGPL-3.0-or-later - See LICENSE file in the CIRISVerify repository.
| text/markdown | null | CIRIS Engineering <engineering@ciris.ai> | null | null | null | null | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"Operating System :: POSIX :: Linux",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",... | [] | null | null | >=3.10 | [] | [] | [] | [
"pydantic>=2.0.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/CIRISAI/CIRISVerify",
"Repository, https://github.com/CIRISAI/CIRISVerify",
"Documentation, https://github.com/CIRISAI/CIRISVerify/tree/main/bindings/python",
"Issues, https://github.com/CIRISAI/CIRISVerify/issues"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:26:41.010006 | ciris_verify-0.6.3.tar.gz | 21,227 | 68/7f/6a7e3867e73dc8ff65e4cece0eface99a41ca8b26056726f1776e253c3df/ciris_verify-0.6.3.tar.gz | source | sdist | null | false | d46b2610795f18415e480a2a31c2940d | 1e45bc693619d8ed963f2f2f2afa8dafc514c944d48b17ff0cd459dc05fd3440 | 687f6a7e3867e73dc8ff65e4cece0eface99a41ca8b26056726f1776e253c3df | AGPL-3.0-or-later | [] | 472 |
2.1 | quant-kernel | 2.9.0 | High-performance derivative pricing engine with 40+ algorithms | # QuantKernel
QuantKernel is a C++17 quantitative pricing kernel with Python bindings.
It focuses on fast scalar and batch option analytics across closed-form, lattice,
finite-difference, Monte Carlo, Fourier, quadrature, regression-approximation,
Greek-estimation, and ML-inspired methods.
Linux, macOS, and Windows are supported.
## Scope
QuantKernel provides:
- C++ shared library (`libquantkernel.so` / `libquantkernel.dylib` / `libquantkernel.dll`) with C ABI exports.
- Python package (`quantkernel`) with scalar and batch methods.
- Optional Python-level accelerator (`QuantAccelerator`) for backend selection (`auto`, `cpu`, `gpu`).
## Install (Not available yet; more testing needed)
End users (recommended):
```bash
python -m pip install --upgrade pip
python -m pip install quant-kernel
```
This installs a prebuilt wheel on supported platforms and does not require local C++ compilation.
## Implemented Algorithm Families
### Closed-form / Semi-analytical
- Black-Scholes-Merton
- Black-76
- Bachelier
- Heston characteristic-function pricing
- Merton jump-diffusion
- Variance-Gamma characteristic-function pricing
- SABR (Hagan lognormal IV + Black-76 pricing)
- Dupire local volatility inversion
### Tree / Lattice
- CRR
- Jarrow-Rudd
- Tian
- Leisen-Reimer
- Trinomial tree
- Derman-Kani style local-vol tree entrypoints:
- Constant local vol surface (`derman_kani_const_local_vol_price`)
- Vanilla call-surface driven entrypoint (`derman_kani_call_surface_price`)
### Finite Difference
- Explicit FD
- Implicit FD
- Crank-Nicolson
- ADI (Douglas, Craig-Sneyd, Hundsdorfer-Verwer)
- PSOR
### Monte Carlo
- Standard Monte Carlo
- Euler-Maruyama
- Milstein
- Longstaff-Schwartz
- Quasi Monte Carlo (Sobol, Halton)
- Multilevel Monte Carlo
- Importance Sampling
- Control Variates
- Antithetic Variates
- Stratified Sampling
### Fourier Transform Methods
- Carr-Madan FFT
- COS (Fang-Oosterlee)
- Fractional FFT
- Lewis Fourier inversion
- Hilbert transform pricing
### Integral Quadrature
- Gauss-Hermite
- Gauss-Laguerre
- Gauss-Legendre
- Adaptive quadrature
### Regression Approximation
- Polynomial Chaos Expansion
- Radial Basis Functions
- Sparse Grid Collocation
- Proper Orthogonal Decomposition
### Greeks / Adjoint Methods
- Pathwise derivative delta
- Likelihood ratio delta
- AAD delta
### Machine-learning Inspired Pricing
- Deep BSDE
- PINNs
- Deep Hedging
- Neural SDE calibration
## Repository Layout
- `cpp/`
- `include/quantkernel/qk_api.h`: C API declarations
- `src/`: implementations and API bridge (`qk_api.cpp`)
- `tests/`: C++ test executables
- `python/`
- `quantkernel/`: Python API (`QuantKernel`, `QuantAccelerator`)
- `tests/`: pytest suite
- `examples/`: usage and benchmark scripts
- `Makefile`: common build/test commands
## Requirements
- CMake >= 3.14
- C++17 compiler
- Python >= 3.11
- NumPy
Optional:
- CuPy (for GPU backend in accelerator paths)
## Build
From project root:
```bash
python3 -m venv .venv
source .venv/bin/activate
python3 -m pip install --upgrade pip
python3 -m pip install -r requirements.txt
cmake -S . -B build
cmake --build build -j
```
## Python Setup (from source checkout)
Point Python to the package and shared library:
```bash
export PYTHONPATH=$PWD/python
export QK_LIB_PATH=$PWD/build/cpp
```
Then use:
```python
from quantkernel import QuantKernel, QK_CALL
qk = QuantKernel()
price = qk.black_scholes_merton_price(
100.0, 100.0, 1.0, 0.2, 0.03, 0.01, QK_CALL
)
print(price)
```
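As a sanity check, the scalar result can be compared against a pure-Python Black-Scholes-Merton reference (the standard closed form, independent of QuantKernel):

```python
from math import erf, exp, log, sqrt

def bsm_call(s, k, t, vol, r, q):
    """Closed-form Black-Scholes-Merton price of a European call."""
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    d1 = (log(s / k) + (r - q + 0.5 * vol * vol) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return s * exp(-q * t) * N(d1) - k * exp(-r * t) * N(d2)

print(bsm_call(100.0, 100.0, 1.0, 0.2, 0.03, 0.01))  # approximately 8.83
```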
## Batch Usage
```python
import numpy as np
from quantkernel import QuantKernel, QK_CALL, QK_PUT
qk = QuantKernel()
n = 100_000
rng = np.random.default_rng(42)
spot = rng.uniform(80.0, 120.0, n)
strike = rng.uniform(80.0, 120.0, n)
t = rng.uniform(0.25, 2.0, n)
vol = rng.uniform(0.1, 0.6, n)
r = rng.uniform(0.0, 0.08, n)
q = rng.uniform(0.0, 0.04, n)
option_type = np.where((np.arange(n) & 1) == 0, QK_CALL, QK_PUT).astype(np.int32)
prices = qk.black_scholes_merton_price_batch(spot, strike, t, vol, r, q, option_type)
print(prices[:3])
```
## Derman-Kani Call-Surface API (Python)
`derman_kani_call_surface_price` accepts:
- `surface_strikes`: 1D strikes
- `surface_maturities`: 1D maturities
- `surface_call_prices`:
- 2D array with shape `(len(surface_maturities), len(surface_strikes))`, or
- flattened 1D array of that size
Example:
```python
from quantkernel import QuantKernel, QK_CALL
qk = QuantKernel()
spot, r, q = 100.0, 0.03, 0.01
surface_strikes = [80, 90, 100, 110, 120]
surface_maturities = [0.5, 1.0, 1.5]
# Synthetic surface here; in production use observed call prices.
surface_call_prices = [
[qk.black_scholes_merton_price(spot, k, tau, 0.2, r, q, QK_CALL) for k in surface_strikes]
for tau in surface_maturities
]
price = qk.derman_kani_call_surface_price(
spot=spot,
strike=100.0,
t=1.0,
r=r,
q=q,
option_type=QK_CALL,
surface_strikes=surface_strikes,
surface_maturities=surface_maturities,
surface_call_prices=surface_call_prices,
steps=20,
)
print(price)
```
If `QK_LIB_PATH` is unset, the package also searches for a bundled shared library from an installed wheel.
## Testing
From project root:
```bash
make test-cpp
make test-py
# or
make quick
```
Direct commands:
```bash
ctest --test-dir build --output-on-failure
PYTHONPATH=python QK_LIB_PATH=build/cpp pytest -q python/tests
```
## Benchmark
```bash
PYTHONPATH=python QK_LIB_PATH=build/cpp \
python3 python/examples/benchmark_scalar_batch_cpp.py --n 50000 --repeats 3
```
## Error Handling
### C API
- Batch functions return ABI error codes (`QK_OK`, `QK_ERR_NULL_PTR`, `QK_ERR_BAD_SIZE`, `QK_ERR_INVALID_INPUT`, etc.).
- Use `qk_get_last_error()` for thread-local error detail.
### Python API
- Raises typed exceptions:
- `QKError`
- `QKNullPointerError`
- `QKBadSizeError`
- `QKInvalidInputError`
## License
`LICENSE` (WTFPL).
| text/markdown | QuantKernel Contributors | null | null | null | DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004
Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You just DO WHAT THE FUCK YOU WANT TO. | quant, options, pricing, derivatives, finance, hpc | [
"Development Status :: 4 - Beta",
"Intended Audience :: Financial and Insurance Industry",
"Intended Audience :: Science/Research",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: C++",
"Programming Language :: Python :: 3",
"Programming Language :: Python :... | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy>=1.24",
"build>=1.2.2; extra == \"dev\"",
"cibuildwheel>=2.23.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"cupy>=12.0; extra == \"gpu\""
] | [] | [] | [] | [
"Homepage, https://github.com/yluoc/Quant-Kernel",
"Repository, https://github.com/yluoc/Quant-Kernel",
"Issues, https://github.com/yluoc/Quant-Kernel/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:26:05.151120 | quant_kernel-2.9.0.tar.gz | 139,775 | 6b/2b/01e5f1997f279fbc4c0057528cce5ec5d201e9e9aad2d7056788ac2395e8/quant_kernel-2.9.0.tar.gz | source | sdist | null | false | 7292e4f4e38262a4a99ea25f311f0ae6 | 1b6b43c076a55552772d297bb2dc3cfad837944f099f7c1dcee00de5a90bb692 | 6b2b01e5f1997f279fbc4c0057528cce5ec5d201e9e9aad2d7056788ac2395e8 | null | [] | 511 |
2.4 | secretzero | 0.1.2 | A secrets orchestration, lifecycle, and bootstrap engine for code repositories | # SecretZero
SecretZero is a secrets orchestration, lifecycle, and bootstrap engine.
<img
style="display: block;
margin-left: auto;
margin-right: auto;
width: 80%;"
src="./docs/inc/secret0_angel_small.png"
alt="Secret0 Logo">
</img>
SecretZero is a secrets-as-code management tool that automates the creation, seeding, and lifecycle management of project secrets through self-documenting declarative manifests. The very first secrets you seed for a new project or environment (known in the industry as 'secret-zero') become timestamped, rotatable, and maintainable through git-compatible lock files.
## The Problem
If you have ever asked any of these questions about a new or existing codebase then SecretZero is for you!
- Where are all the secrets in my project?
- How do I generate new secrets, API keys, or certificates to deploy a whole new environment?
- How do I handle secret-zero?
- When were my critical project secrets last rotated?
- If I needed to bootstrap this entire project from scratch would I be able to do so without manually handling any secrets?
- How do I document my project's secrets surface area and requirements?
## Features
### Core Capabilities
- **Idempotent bootstrap** of initial secrets for one or more environments
- **Lockfile tracking** for secrets with rotation history and timestamps
- **Dual-purpose providers** that can both request/rotate new secrets and store them across a variety of environments
- **Type safety and validation** at every layer with strongly-typed Pydantic models
- **Multiple profiles** for targeting multiple environments independently
- **Manual secret fallbacks** via environment variables when automatic generation isn't possible
- **Self-documenting** secrets-as-code showing when secrets were created, from where, and where they are now
### Phase 6: Advanced Features (NEW)
- **Secret Rotation Policies** - Automated rotation based on configurable time periods (90d, 2w, etc.)
- **Policy Enforcement** - Validate secrets against rotation, compliance, and access control policies
- **Compliance Support** - Built-in SOC2 and ISO27001 compliance policies
- **Drift Detection** - Detect when secrets have been modified outside of SecretZero's control
- **Rotation Tracking** - Track rotation history, count, and last rotation timestamp in lockfile
- **One-time Secrets** - Support for secrets that should only be generated once
### Phase 7: API Service (NEW)
- **REST API** - FastAPI-based HTTP API for programmatic secret management
- **OpenAPI Documentation** - Interactive API docs with Swagger UI and ReDoc
- **API Authentication** - Secure API key-based authentication
- **Audit Logging** - Comprehensive audit trail for all API operations
- **Remote Management** - Manage secrets from CI/CD pipelines, scripts, or applications
### CLI Commands
```bash
# Initialize and validate
secretzero create # Create new Secretfile from template
secretzero init # Check and install provider dependencies
secretzero validate # Validate Secretfile configuration
secretzero test # Test provider connectivity
# Secret management
secretzero sync # Generate and sync secrets to targets
secretzero sync --dry-run # Preview changes without applying
secretzero sync -s db_password # Sync only specific secret(s)
secretzero show <secret> # Show secret metadata
# Visualization
secretzero graph # Generate visual flow diagram
secretzero graph --type detailed # Show detailed configuration
secretzero graph --type architecture # Show system architecture
secretzero graph --format terminal # Text-based summary
secretzero graph --output diagram.md # Save to file
# Rotation and policies (Phase 6)
secretzero rotate # Rotate secrets based on policies
secretzero rotate --dry-run # Preview rotation status
secretzero rotate --force # Force rotation even if not due
secretzero policy # Check policy compliance
secretzero drift # Detect drift in secrets
# Provider management
secretzero providers list # List available providers
secretzero providers capabilities # Show provider capabilities
secretzero providers token-info # Show GitHub token permissions
secretzero providers token-info --provider github # Explicit provider
# API Server (Phase 7)
secretzero-api # Start REST API server
```
### API Endpoints
```bash
# Health and documentation
GET / # API info
GET /health # Health check
GET /docs # Interactive Swagger UI
GET /redoc # ReDoc documentation
# Secret management
GET /secrets # List all secrets
GET /secrets/{name}/status # Get secret status
POST /sync # Sync/generate secrets
POST /config/validate # Validate configuration
# Rotation and policies
POST /rotation/check # Check rotation status
POST /rotation/execute # Execute rotation
POST /policy/check # Check policy compliance
POST /drift/check # Check for drift
# Audit and monitoring
GET /audit/logs # Get audit logs
```
## How It Works
At its core, SecretZero is a declarative manifest that defines your project's secret usage and then does its best to help you request and seed those secrets. Think of it as a package dependency list, but for your secrets. SecretZero processes a simple declarative configuration file in your project that lays out where your secrets come from and where they need to go.
A user with all the required providers authenticated can run `secretzero sync` to bootstrap the environment from scratch. SecretZero checks whether the environment has already been bootstrapped (via the lockfile) and attempts to automate requesting and storing your secrets. In its simplest form, you can use the local system to generate random passwords and then store them in AWS Secrets Manager, a local .env file, Azure Key Vault, a Kubernetes Secret, or a Vault KV store.
SecretZero is composed of providers and secret types. Providers determine where secrets can be requested from and how they can be stored; they are typically associated with an authenticated platform such as AWS, Azure, HashiCorp Vault, or an external API endpoint. Secret types are expected secret formats such as generic random passwords, API tokens, database credentials, SSH keys, certificates, and more. Below is an ideal workflow, omitting the details of authentication.
```mermaid
sequenceDiagram
participant User
participant SecretZero
participant Vault as HashiCorp Vault
participant LocalFS as Local Filesystem
participant VaultKV as Vault KV Store
User->>SecretZero: secretzero sync
SecretZero->>Vault: Request random password generation
Vault-->>SecretZero: Generated password
SecretZero->>LocalFS: Write to .env file (template provider)
SecretZero->>VaultKV: Store secret at kv/path
SecretZero->>User: ✓ Secret synced to 2 targets
```
Here is a more complex example workflow for a randomly generated initial database credential that is then stored in a local .env file, an AWS Secrets Manager secret, and a HashiCorp Vault KV store:
```mermaid
graph LR
Source[Secret Source<br/>Local Generator]
Secret[Secret Object<br/>postgres-password<br/>type: password]
Target1[Target 1<br/>AWS Secrets Manager<br/>prod/db/postgres]
Target2[Target 2<br/>Local .env File<br/>DATABASE_PASSWORD]
Target3[Target 3<br/>Vault KV Store<br/>kv/prod/postgres]
Source -->|generates| Secret
Secret -->|syncs to| Target1
Secret -->|syncs to| Target2
Secret -->|syncs to| Target3
style Source fill:#e1f5ff
style Secret fill:#fff4e1
style Target1 fill:#e8f5e9
style Target2 fill:#e8f5e9
style Target3 fill:#e8f5e9
```
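The flow above could correspond to a manifest fragment along these lines. This is a hypothetical sketch: every key name below is illustrative, and the real schema lives in the repository's Secretfile.yml.

```yaml
# Hypothetical manifest fragment -- key names are illustrative only;
# see the project's Secretfile.yml for the real schema.
secrets:
  postgres-password:
    type: password
    source: local              # generate with the local provider
    targets:
      - provider: aws          # AWS Secrets Manager
        path: prod/db/postgres
      - provider: dotenv       # local .env file
        key: DATABASE_PASSWORD
      - provider: vault        # HashiCorp Vault KV store
        path: kv/prod/postgres
```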
Here is the sequence of events for a developer who needs to maintain a manually requested API key for their project. SecretZero helps bootstrap the process and creates a lockfile entry, tracking and timestamping it for future rotation.
```mermaid
sequenceDiagram
participant User
participant SecretZero
participant ThirdParty as Third-Party Service
participant GitLab as GitLab CI/CD Variable
participant LockFile as .gitsecrets.lock
User->>SecretZero: secretzero sync
SecretZero->>GitLab: Check authentication status
alt Not authenticated
SecretZero->>User: ✗ Error: GitLab authentication required
Note over SecretZero,User: Cannot proceed without credentials<br/>to write to target, GitLab CICD Variable
else User is authenticated
SecretZero->>LockFile: Check for existing entry
alt Lockfile entry exists
LockFile-->>SecretZero: Secret already synced
SecretZero->>User: ✓ Skipped (already exists in lockfile)
else No lockfile entry
SecretZero->>User: Prompt: Enter API key for 'service-api-key'
User->>ThirdParty: Manually create API key
ThirdParty-->>User: API key value
User->>SecretZero: Paste API key
SecretZero->>GitLab: Store as CI/CD variable
SecretZero->>LockFile: Update with metadata hash
SecretZero->>User: ✓ Secret synced to GitLab + lockfile updated
end
end
```
Here is the sequence of events for a developer who needs to maintain an Azure Application ID credential for their project using SecretZero. Authentication is required both to request the credential via the Azure API and to store it in Azure Key Vault. Where possible, SecretZero providers attempt to request secrets automatically; if that request fails, they fall back to manual prompting.
```mermaid
sequenceDiagram
participant User
participant SecretZero
participant Azure as Azure AD API
participant AzureKV as Azure Key Vault
participant LockFile as .gitsecrets.lock
User->>SecretZero: secretzero sync
SecretZero->>Azure: Check authentication status
alt Not authenticated
SecretZero->>User: ✗ Error: Azure authentication required
Note over SecretZero,User: Cannot proceed without credentials<br/>to write to target, Azure Key Vault
else User is authenticated
SecretZero->>LockFile: Check for existing entry
alt Lockfile entry exists
LockFile-->>SecretZero: Secret already synced
SecretZero->>User: ✓ Skipped (already exists in lockfile)
else No lockfile entry
SecretZero->>Azure: Request new Application ID + client secret
Azure-->>SecretZero: App ID & secret created
SecretZero->>AzureKV: Store credentials
SecretZero->>LockFile: Update with metadata hash
SecretZero->>User: ✓ Secret auto-generated and synced to Azure Key Vault
end
end
```
Here is the sequence of events for a developer who needs to maintain an Azure Application ID credential for their project using SecretZero. Authentication is required to request the credential via the Azure API. If Azure authentication fails, SecretZero falls back to manual credential entry. The credentials are then stored as a GitHub CI/CD secret that the user is authenticated against.
```mermaid
sequenceDiagram
participant User
participant SecretZero
participant Azure as Azure AD API
participant GitHub as GitHub Secret
participant LockFile as .gitsecrets.lock
User->>SecretZero: secretzero sync
SecretZero->>LockFile: Check for existing entry
alt Lockfile synced
LockFile-->>SecretZero: Secret already synced
SecretZero->>User: ✓ Skipped (already exists in lockfile)
else Lockfile unsynced
SecretZero->>GitHub: Check authentication status
alt GitHub authenticated
SecretZero->>Azure: Check authentication status
alt Azure authenticated
SecretZero->>Azure: Request new Application ID + client secret
Azure-->>SecretZero: App ID & secret created
else Not authenticated
SecretZero->>User: ⚠ Azure Auth failed (falling back to manual entry)
SecretZero->>User: Prompt: Enter Application ID
User->>SecretZero: Provide App ID
SecretZero->>User: Prompt: Enter client secret
User->>SecretZero: Provide client secret
end
else
SecretZero->>User: ⚠ GitHub Auth failed!
end
SecretZero->>GitHub: Store credentials as CI/CD secret
SecretZero->>LockFile: Update with metadata hash
SecretZero->>User: ✓ Secret synced to GitHub Actions + lockfile updated
end
```
## Auto-Authentication
Not shown in the workflows above is the fact that SecretZero will also attempt to authenticate automatically via OIDC, environment variables, or whatever mechanism each provider supports.
### Checking Provider Permissions
SecretZero can introspect provider authentication tokens to verify they have the necessary permissions:
```bash
# Check GitHub token permissions and scopes
secretzero providers token-info
# Output shows:
# - User information
# - OAuth scopes (repo, workflow, admin:org, etc.)
# - Capabilities (can read repos, write secrets, etc.)
# - Links to documentation on permission requirements
```
This is useful for:
- **Troubleshooting** - Verify token has required scopes before attempting operations
- **Security auditing** - Document what permissions are granted to tokens
- **Compliance** - Ensure tokens follow principle of least privilege
- **Onboarding** - Help new team members create tokens with correct permissions
Currently supported providers: GitHub (more providers coming soon).
## Use Cases
### GitOps-First Infrastructure
Easy-to-read lockfiles are 100% git-friendly. Perfect for teams deploying infrastructure via GitOps, where secrets need automated provisioning across multiple environments without manual intervention.
### Multi-Cloud Secret Synchronization
Sync secrets across AWS Secrets Manager, Azure Key Vault, and HashiCorp Vault simultaneously from a single source of truth.
### Database Credential Bootstrapping
Generate and rotate database credentials (PostgreSQL, MySQL, MongoDB) during initial deployment or scheduled rotation cycles.
### Certificate Management
Automate creation and distribution of TLS certificates, SSH keypairs, and signing certificates across development, staging, and production environments.
### CI/CD Secret Provisioning
Bootstrap CI/CD pipeline secrets (GitHub Actions, GitLab CI, Jenkins) from centralized configuration without storing credentials in version control.
### Kubernetes Secret Seeding
Generate and deploy secrets to multiple Kubernetes clusters/namespaces during cluster initialization or application deployment.
- Generate External Secrets Operator manifests for target secrets.
### Development Environment Setup
New team members can bootstrap their local `.env` files with production-like secrets in seconds without manual credential sharing.
### Compliance & Audit Requirements
Maintain an auditable lockfile showing when secrets were created, last rotated, and where they're deployed for SOC2/ISO compliance.
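To make this concrete, an audit-friendly lockfile entry might record something like the following. The field names here are hypothetical, not the actual .gitsecrets.lock schema:

```yaml
# Hypothetical lockfile entry -- field names are illustrative, not the
# actual .gitsecrets.lock format.
postgres-password:
  metadata_hash: "sha256:9f2c..."    # hash only; never the plaintext secret
  created_at: "2026-01-05T12:00:00Z"
  last_rotated: "2026-02-10T09:30:00Z"
  rotation_count: 2
  targets:
    - aws:prod/db/postgres
    - vault:kv/prod/postgres
```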
### Secret-Zero Problem
Solve the "where do the first secrets come from" challenge when deploying greenfield infrastructure or disaster recovery scenarios.
### API Key Lifecycle Management
Track and rotate third-party API keys (Stripe, SendGrid, Twilio) across multiple services while maintaining synchronization.
### Microservices Secret Coordination
Ensure all microservices receive consistent shared secrets (JWT signing keys, encryption keys) across distributed deployments.
### Environment Parity Testing
Quickly spin up ephemeral test environments with production-like secrets for integration testing without exposing real credentials.
# Components
These are the core components of this application.
## Secrets
Secrets are usually just text or dict values. In our case we use a schema of allowed values so that we can easily map out a secret type when requesting it from a provider (you need to know what you are asking for, right?). This is really a contract for the data expected from a provider, which is then expressed in targets.
> **NOTE** Every secret has a source and one or more targets.
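As a rough sketch of that contract (a stdlib dataclass standing in for SecretZero's actual Pydantic models; all names are illustrative):

```python
# Rough stand-in for the schema contract described above. SecretZero's real
# models are Pydantic; this stdlib sketch only shows the shape of the rule
# that every secret has a source and at least one target.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SecretSpec:
    name: str
    type: str                 # e.g. "password", "api_token", "ssh_key"
    source: str               # provider the secret is requested from
    targets: List[str] = field(default_factory=list)  # providers it is stored in

    def __post_init__(self) -> None:
        # Enforce the "one or more targets" invariant at construction time.
        if not self.targets:
            raise ValueError(f"secret {self.name!r} needs at least one target")
```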
## Providers
Providers are similar to terraform providers and are often an authentication point granting API access to secret sources or targets.
Secret sources are provider-bound. If authentication fails, the user is (optionally) prompted for secrets manually as a failover. This is often necessary when there is a manual request somewhere in your bootstrap process.
## Installation
### Basic Installation
```bash
pip install secretzero
```
### With Provider Support
```bash
# AWS support
pip install secretzero[aws]
# Azure support
pip install secretzero[azure]
# Vault support
pip install secretzero[vault]
# Kubernetes support
pip install secretzero[kubernetes]
# CI/CD support (GitHub, GitLab, Jenkins)
pip install secretzero[cicd]
# API server support
pip install secretzero[api]
# Everything
pip install secretzero[all]
```
## Installation (Development)
```bash
# Clone the repository
git clone https://github.com/zloeber/SecretZero.git
cd SecretZero
# Create virtual environment (include pip and other tools)
uv sync --all-extras
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install in development mode
uv pip install -e ".[dev]"
```
## Quick Start
### CLI Usage
```bash
# List supported secret types
secretzero secret-types
# Show detailed configuration for a specific type
secretzero secret-types --type password --verbose
# Create a new manifest from template
secretzero create --template-type basic
# Validate your manifest
secretzero validate
# Test provider connectivity
secretzero test
# Generate and sync secrets (dry-run)
secretzero sync --dry-run
```
### API Server
```bash
# Install API dependencies
pip install secretzero[api]
# Set API key (optional, enables authentication)
export SECRETZERO_API_KEY=$(python -c "import secrets; print(secrets.token_urlsafe(32))")
# Start server
secretzero-api
# Server runs on http://localhost:8000
# Visit http://localhost:8000/docs for interactive API documentation
```
### API Usage Examples
```bash
# Health check
curl http://localhost:8000/health
# List secrets (with authentication)
curl -H "X-API-Key: $SECRETZERO_API_KEY" http://localhost:8000/secrets
# Sync secrets
curl -X POST -H "X-API-Key: $SECRETZERO_API_KEY" \
-H "Content-Type: application/json" \
http://localhost:8000/sync \
-d '{"dry_run": true, "force": false}'
# Check rotation status
curl -X POST -H "X-API-Key: $SECRETZERO_API_KEY" \
-H "Content-Type: application/json" \
http://localhost:8000/rotation/check \
-d '{}'
```
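The same calls can be made from Python. Here is a minimal stdlib sketch: the endpoint paths and the `X-API-Key` header come from this README, while the helper itself is illustrative and not part of secretzero.

```python
# Minimal stdlib sketch of calling the SecretZero REST API from Python.
# Endpoint paths and the X-API-Key header follow the README; the helper
# is illustrative, not part of the secretzero package.
import json
import urllib.request

def build_request(base_url: str, path: str, api_key: str, payload=None):
    """Build a urllib Request with the API key header; POST when a body is given."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        base_url + path, data=data, method="POST" if data else "GET"
    )
    req.add_header("X-API-Key", api_key)
    if data is not None:
        req.add_header("Content-Type", "application/json")
    return req

req = build_request("http://localhost:8000", "/sync", "my-key",
                    {"dry_run": True, "force": False})
# urllib.request.urlopen(req) would send it to a running secretzero-api server
```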
For more API examples, see [docs/api-getting-started.md](docs/api-getting-started.md).
```bash
# Actually create and deploy secrets
secretzero sync
```
## Example Manifest
**See [Secretfile.yml](./Secretfile.yml)**
## Documentation
- **[Docs](./docs)**
- **[Extending SecretZero](./docs/extending.md)** - Guide for adding new secret types and providers
## Security
SecretZero is designed with security as a priority:
- ✅ No plaintext secrets in lock files (only metadata hashes)
- ✅ Schema-driven validation at every layer
- ✅ Type-safe implementations with Pydantic
- ✅ Idempotent operations to prevent accidental overwrites
- ✅ Audit trail through lock file tracking
## License
[Apache](./LICENSE)
# FAQs
## Relationship to External Secrets Operator
SecretZero is designed to complement, not replace, the External Secrets Operator.
SecretZero manages secret creation, bootstrap, lifecycle, and auditability upstream, while External Secrets handles runtime projection into Kubernetes.
## Relationship to [Vault|Infisical|Others]
A secrets management solution like Infisical is a strong control plane for secret storage and policy. SecretZero complements these and other secrets solutions by adding deterministic orchestration and cross-provider lifecycle modeling: it simply maps out secrets from inception to usage and beyond.
| text/markdown | Zachary Loeber | null | null | null | Apache-2.0 | null | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"pydantic>=2.0.0",
"pyyaml>=6.0",
"click>=8.0.0",
"jinja2>=3.0.0",
"rich>=13.0.0",
"setuptools-scm>=8.3.1",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"types-pyya... | [] | [] | [] | [
"Homepage, https://github.com/zloeber/SecretZero",
"Documentation, https://secret0.com",
"Issues, https://github.com/zloeber/SecretZero/issues",
"CI, https://github.com/zloeber/SecretZero/actions"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:25:56.551630 | secretzero-0.1.2.tar.gz | 3,024,100 | cf/86/571645b5b27b3ee1d3c5c43c7c19f768d60b568fa779bcdd8062f31c6412/secretzero-0.1.2.tar.gz | source | sdist | null | false | 07b0fbc781fb20ddd6bee5d61c10d6a9 | ef50a6351d31ba2fddce495e51ee7133b889df6b343d85720b53de3084d426fb | cf86571645b5b27b3ee1d3c5c43c7c19f768d60b568fa779bcdd8062f31c6412 | null | [
"LICENSE"
] | 246 |
2.1 | reghelp-client | 1.3.3 | Modern asynchronous Python library for working with the REGHelp Key API | # REGHelp Python Client



---
## 📑 Table of contents
1. [Features](#-features)
2. [Installation](#-installation)
3. [Quick start](#-quick-start)
4. [What's new](#-whats-new-in-133)
5. [Environment variables](#-environment-variables)
6. [Testing](#-testing)
7. [Contributing](#-contributing)
8. [FAQ](#-faq)
9. [Changelog](#-changelog)
---
## 🇬🇧 English
Modern asynchronous Python library for interacting with the REGHelp Key API. It supports all services: Push tokens, Email, Integrity, Turnstile, VoIP Push and Recaptcha Mobile.
### 🚀 Features
* **Asynchronous first** – full `async`/`await` support powered by `httpx`.
* **Type-safe** – strict typing with Pydantic data models.
* **Retries with exponential back-off** built-in.
* **Smart rate-limit handling** (provider-configurable).
* **Async context-manager** for automatic resource management.
* **Webhook support** out of the box.
* **Comprehensive error handling** with dedicated exception classes.
### 🆕 What's new in 1.3.3
* `proxy` parameter in `get_recaptcha_mobile_token()` and `RecaptchaMobileRequest` model is now **optional** (`None` by default). Proxy parameters are only included in the request when explicitly provided.
* Added `processing` status to `TaskStatus` enum — Recaptcha Mobile API returns this status while a task is being executed.
### What was new in 1.3.1
* `wait_for_result` now returns task data even when `status="error"`, so your code can decide how to handle failures.
* All `get_*_status` methods return the full API payload instead of raising when `status="error"`.
* `set_push_status` treats HTTP 200 responses with a valid balance as success, even if `status="error"`.
* `get_turnstile_token` accepts new `actor` and `scope` parameters and forwards them to the API.
### What was new in 1.2.4
* Added support for the `submitted` task status in client models.
* Masked `apiKey` in debug logs.
* Preserved `task_id` across 429 retries for better diagnostics.
* Generalized rate-limit messaging (limits are provider-controlled).
* Updated documentation and examples (no longer read tokens from create responses).
### What was new in 1.2.3
* **Improved error handling for `TASK_NOT_FOUND`** – when task ID is known, it returns `TaskNotFoundError` with the specific ID; otherwise it raises a generic `RegHelpError` instead of the confusing "unknown" message.
### What was new in 1.2.2
* **Fixed `TaskNotFoundError`** – now shows the real task ID instead of "unknown" when a task is not found.
* **Improved error handling** – better reporting for status methods with correct task context.
### What was new in 1.2.1
* **Increased proxy configuration limits** – proxy address up to 255 characters, login up to 128, password up to 256.
* **Enhanced `ProxyConfig` validation** – improved support for long domain names and credentials.
### What was new in 1.2.0
* **Standard Integrity tokens** – request them via `get_integrity_token(..., token_type="std")`.
* **`IntegrityTokenType` enum** for type-safe token selection.
* Public exports for `AppDevice`, `IntegrityStatusResponse`, `VoipStatusResponse`, `IntegrityTokenType` from the package root.
* `get_integrity_token()` switched to keyword-only parameters for new options while staying backward compatible.
### 📦 Installation
```bash
pip install reghelp-client
```
For development:
```bash
pip install "reghelp-client[dev]"
```
### 🔧 Quick start
```python
import asyncio
from reghelp_client import RegHelpClient, AppDevice, EmailType

async def main():
    async with RegHelpClient("your_api_key") as client:
        # Check balance
        balance = await client.get_balance()
        print(f"Balance: {balance.balance} {balance.currency}")

        # Get Telegram iOS push token
        task = await client.get_push_token(
            app_name="tgiOS",
            app_device=AppDevice.IOS
        )
        print(f"Task created: {task.id}")

        # Wait for result
        result = await client.wait_for_result(task.id, "push")
        print(f"Push token: {result.token}")

if __name__ == "__main__":
    asyncio.run(main())
```
---
## 📚 API Documentation
### Client initialization
```python
from reghelp_client import RegHelpClient

# Basic usage
client = RegHelpClient("your_api_key")

# With custom settings
client = RegHelpClient(
    api_key="your_api_key",
    base_url="https://api.reghelp.net",
    timeout=30.0,
    max_retries=3,
    retry_delay=1.0
)

# Use as an async context manager (recommended)
async with RegHelpClient("your_api_key") as client:
    # Your code here
    pass
```
### 📱 Push Tokens
#### Getting a push token
```python
from reghelp_client import AppDevice

# For Telegram iOS
task = await client.get_push_token(
    app_name="tgiOS",
    app_device=AppDevice.IOS,
    app_version="10.9.2",
    app_build="25345",
    ref="my_ref_tag"
)

# For Telegram Android
task = await client.get_push_token(
    app_name="tg",
    app_device=AppDevice.ANDROID
)

# Check the status
status = await client.get_push_status(task.id)
if status.status == "done":
    print(f"Token: {status.token}")
```
#### Supported applications
| Platform | app_name | Bundle ID |
|----------|----------|-----------|
| Android | `tg` | `org.telegram.messenger` |
| Android | `tg_beta` | `org.telegram.messenger.beta` |
| Android | `tg_web` | `org.telegram.messenger.web` |
| Android | `tg_x` | `org.thunderdog.challegram` |
| iOS | `tgiOS` | `ph.telegra.Telegraph` |
#### Reporting a failed token
```python
from reghelp_client import PushStatusType

# If the token turned out not to work
await client.set_push_status(
    task_id="task_id",
    phone_number="+15551234567",
    status=PushStatusType.NOSMS
)
```
### 📧 Email Service
```python
from reghelp_client import EmailType

# Get a temporary email address
email_task = await client.get_email(
    app_name="tg",
    app_device=AppDevice.IOS,
    phone="+15551234567",
    email_type=EmailType.ICLOUD
)
print(f"Email: {email_task.email}")

# Wait for the confirmation code
email_status = await client.wait_for_result(email_task.id, "email")
print(f"Code: {email_status.code}")
```
### 🔒 Integrity Service
```python
import base64

# Generate a nonce
nonce = base64.urlsafe_b64encode(b"your_nonce_data").decode()

# Get an integrity token
integrity_task = await client.get_integrity_token(
    app_name="tg",
    app_device=AppDevice.ANDROID,
    nonce=nonce
)

# Wait for the result
result = await client.wait_for_result(integrity_task.id, "integrity")
print(f"Integrity token: {result.token}")
```
### 🤖 Recaptcha Mobile
```python
from reghelp_client import ProxyConfig, ProxyType

# Solve recaptcha without a proxy (proxy is optional)
recaptcha_task = await client.get_recaptcha_mobile_token(
    app_name="org.telegram.messenger",
    app_device=AppDevice.ANDROID,
    app_key="6Lc-recaptcha-site-key",
    app_action="login",
)

# Or with a proxy (long values are supported)
proxy = ProxyConfig(
    type=ProxyType.HTTP,
    address="very-long-proxy-domain-name.example.com",  # up to 255 characters
    port=8080,
    login="very_long_username_up_to_128_chars",  # up to 128 characters
    password="very_long_password_up_to_256_characters"  # up to 256 characters
)
recaptcha_task = await client.get_recaptcha_mobile_token(
    app_name="org.telegram.messenger",
    app_device=AppDevice.ANDROID,
    app_key="6Lc-recaptcha-site-key",
    app_action="login",
    proxy=proxy,
)

# Wait for the result
result = await client.wait_for_result(recaptcha_task.id, "recaptcha")
print(f"Recaptcha token: {result.token}")
```
### 🔐 Turnstile
```python
# Solve Cloudflare Turnstile
turnstile_task = await client.get_turnstile_token(
    url="https://example.com/page",
    site_key="0x4AAAA...",
    action="login",  # optional
    actor="test_bot",  # optional
    scope="cf-turnstile",  # optional
    proxy="http://proxy.example.com:8080"  # optional
)

# Wait for the result
result = await client.wait_for_result(turnstile_task.id, "turnstile")
print(f"Turnstile token: {result.token}")
```
### 📞 VoIP Push
```python
# Get a VoIP push token
voip_task = await client.get_voip_token(
    app_name="tgiOS",
    ref="voip_ref"
)

# Wait for the result
result = await client.wait_for_result(voip_task.id, "voip")
print(f"VoIP token: {result.token}")
```
### 🔄 Automatically waiting for a result
```python
# Automatically wait for a task to complete
result = await client.wait_for_result(
    task_id="task_id",
    service="push",  # push, email, integrity, recaptcha, turnstile, voip
    timeout=180.0,   # maximum wait time
    poll_interval=2.0  # interval between status checks
)
```
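Under the hood this amounts to a plain polling loop. An illustrative sketch of the pattern `wait_for_result` wraps (not the library's actual implementation):

```python
# Illustrative sketch of the polling loop behind a wait_for_result-style
# helper -- not the reghelp_client library's actual implementation.
import asyncio
import time

TERMINAL_STATUSES = {"done", "error"}

async def wait_for_result(get_status, task_id, timeout=180.0, poll_interval=2.0):
    """Poll get_status(task_id) until a terminal status or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = await get_status(task_id)
        if status["status"] in TERMINAL_STATUSES:
            return status
        await asyncio.sleep(poll_interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```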
### 🪝 Webhook support
```python
# Create a task with a webhook
task = await client.get_push_token(
    app_name="tgiOS",
    app_device=AppDevice.IOS,
    webhook="https://yourapp.com/webhook"
)
# When the task completes, a POST request is sent to the given URL
# with JSON data matching the get_status response
```
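On the receiving side you only need to parse that JSON body. A hedged sketch of a payload parser — the field names (`id`, `status`, `token`) are assumed from the examples above, so adjust them to the real webhook schema:

```python
import json

def parse_webhook(body: bytes) -> dict:
    """Parse a webhook POST body into task fields.

    The payload shape is assumed to mirror get_status responses:
    {"id": ..., "status": ..., "token": ...} (an assumption).
    """
    data = json.loads(body)
    return {
        "task_id": data.get("id"),
        "status": data.get("status"),
        "token": data.get("token"),
    }

payload = b'{"id": "task_123", "status": "done", "token": "abc"}'
event = parse_webhook(payload)
```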
## 🚨 Error handling
```python
from reghelp_client import (
    RegHelpError,
    RateLimitError,
    UnauthorizedError,
    TaskNotFoundError,
    NetworkError
)

try:
    task = await client.get_push_token("tgiOS", AppDevice.IOS)
except RateLimitError:
    print("Request limit exceeded")
except UnauthorizedError:
    print("Invalid API key")
except TaskNotFoundError as e:
    print(f"Task not found: {e.task_id}")
except NetworkError as e:
    print(f"Network error: {e}")
except RegHelpError as e:
    print(f"API error: {e}")
```
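A common response to a rate-limit error is retry with exponential backoff. A generic sketch of that pattern — the `RateLimitError` class below is a local stand-in so the example is self-contained, not the class imported from the library:

```python
import time

class RateLimitError(Exception):
    """Local stand-in for the client's rate-limit exception."""

def with_backoff(fn, max_retries=3, base_delay=0.01):
    """Retry `fn` on RateLimitError; the delay doubles on each attempt."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))

attempts = []

def flaky():
    """Fails twice with RateLimitError, then succeeds."""
    attempts.append(1)
    if len(attempts) < 3:
        raise RateLimitError
    return "token"

value = with_backoff(flaky)
```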
## ⚙️ Configuration
### Logging
```python
import logging

# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("reghelp_client")
```
### Custom HTTP client
```python
import httpx

# Use your own HTTP client
custom_client = httpx.AsyncClient(
    timeout=60.0,
    verify=False  # disable SSL verification (not recommended)
)
client = RegHelpClient(
    api_key="your_api_key",
    http_client=custom_client
)
```
## 🧪 Examples for common scenarios
### Bulk token retrieval
```python
import asyncio

async def get_multiple_tokens():
    async with RegHelpClient("your_api_key") as client:
        # Create several tasks in parallel
        tasks = await asyncio.gather(*[
            client.get_push_token("tgiOS", AppDevice.IOS)
            for _ in range(5)
        ])
        # Wait for all results
        results = await asyncio.gather(*[
            client.wait_for_result(task.id, "push")
            for task in tasks
        ])
        for i, result in enumerate(results):
            print(f"Token {i + 1}: {result.token}")
```
### Checking your balance
```python
async def manage_balance():
    async with RegHelpClient("your_api_key") as client:
        balance = await client.get_balance()
        if balance.balance < 10:
            print("Low balance! Top up your account")
            return
        print(f"Current balance: {balance.balance} {balance.currency}")
```
### Handling long-running operations
```python
async def long_running_task():
    async with RegHelpClient("your_api_key") as client:
        task = await client.get_push_token("tgiOS", AppDevice.IOS)
        # Poll the status at a custom interval
        while True:
            status = await client.get_push_status(task.id)
            if status.status == "done":
                print(f"Done! Token: {status.token}")
                break
            elif status.status == "error":
                print(f"Error: {status.message}")
                break
            print(f"Status: {status.status}")
            await asyncio.sleep(5)  # poll every 5 seconds
```
## 📋 Requirements
- Python 3.8+
- httpx >= 0.27.0
- pydantic >= 2.0.0
## 📄 License
MIT License. See [LICENSE](LICENSE) for details.
## 🤝 Support
- Documentation: https://reghelp.net/api-docs
- Support: support@reghelp.net
- Issues: https://github.com/REGHELPNET/reghelp_client/issues
---
## 🌐 Environment variables
| Variable | Description | Example |
|----------|-------------|---------|
| `REGHELP_API_KEY` | Your personal API key | `demo_123abc` |
| `REGHELP_BASE_URL` | Override base URL if you host a private mirror | `https://api.reghelp.net` |
| `REGHELP_TIMEOUT` | Default request timeout in seconds | `30` |
| `REGHELP_MAX_RETRIES` | Max automatic retries on network errors | `3` |
> 💡 *Tip:* you can create a `.env` file and load it with [python-dotenv](https://github.com/theskumar/python-dotenv).
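Reading the variables from the table is straightforward with the stdlib. A sketch of a config loader — the default values here are assumptions taken from the example column, not documented client defaults:

```python
import os

def load_config(env=os.environ):
    """Assemble client settings from environment variables.

    Defaults are assumptions based on the table's example values.
    """
    return {
        "api_key": env.get("REGHELP_API_KEY"),
        "base_url": env.get("REGHELP_BASE_URL", "https://api.reghelp.net"),
        "timeout": float(env.get("REGHELP_TIMEOUT", "30")),
        "max_retries": int(env.get("REGHELP_MAX_RETRIES", "3")),
    }

# A plain dict works in place of os.environ, which makes testing easy.
cfg = load_config({"REGHELP_API_KEY": "demo_123abc", "REGHELP_TIMEOUT": "45"})
```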
---
## 🧪 Testing
```bash
# clone repo and install dev extras
git clone https://github.com/REGHELPNET/reghelp_client.git
cd reghelp_client
pip install -e ".[dev]"
# unit tests + coverage
pytest -v --cov=reghelp_client --cov-report=term-missing
```
Additional commands:
* **Formatting** – `black reghelp_client/ tests/`
* **Linting** – `ruff check reghelp_client/ tests/ examples/`
* **Type checking** – `mypy reghelp_client/`
---
## 🛠️ Contributing
1. Fork the repository and create your branch: `git checkout -b feat/my-feature`
2. Install dev dependencies: `pip install -e ".[dev]"`
3. Run `pre-commit install` to enable hooks.
4. Ensure tests & linters pass: `pytest && ruff check . && mypy .`
5. Submit a pull request with a clear description of your changes.
We follow **Conventional Commits** for commit messages and the **Black** code style.
---
## ❓ FAQ
<details>
<summary>How do I increase the request timeout?</summary>

```python
client = RegHelpClient("api_key", timeout=60.0)
```

</details>
<details>
<summary>Does the client support synchronous code?</summary>
No, the library is asynchronous-first. You can run it in synchronous code with `asyncio.run()`.
</details>
<details>
<summary>What is the difference between `Integrity` and `SafetyNet`?</summary>
`Integrity` refers to the Google Play Integrity API, while SafetyNet is deprecated. REGHelp supports the new Integrity API.
</details>
---
## 🗒️ Changelog
See [CHANGELOG.md](CHANGELOG.md) for a complete release history.
| text/markdown | null | REGHelp Team <support@reghelp.net> | null | REGHelp Team <support@reghelp.net> | MIT License Copyright (c) 2025 REGHelp Team Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | reghelp, api, client, async, push, email, telegram, integrity, recaptcha, turnstile, voip | [
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Langua... | [] | null | null | >=3.8 | [] | [] | [] | [
"httpx>=0.27.0",
"pydantic>=2.0.0",
"typing-extensions>=4.5.0",
"pytest>=7.0.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-httpx>=0.21.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\"",
"coverage>=7... | [] | [] | [] | [
"Homepage, https://github.com/REGHELPNET/reghelp_client",
"Bug Reports, https://github.com/REGHELPNET/reghelp_client/issues",
"Source, https://github.com/REGHELPNET/reghelp_client",
"Documentation, https://docs.reghelp.net/"
] | twine/4.0.2 CPython/3.11.14 | 2026-02-21T05:24:40.911520 | reghelp_client-1.3.3.tar.gz | 23,805 | 4a/83/314615d6f527907896909b9d327691c92d75000b6ac133261b46c56f45d1/reghelp_client-1.3.3.tar.gz | source | sdist | null | false | 7b6d3cea67e178ead084a024a4f4ea7f | c75eba46b3f3fa27b46c5a57e7a1f781a046c7e997335bb590ec07fce7baef92 | 4a83314615d6f527907896909b9d327691c92d75000b6ac133261b46c56f45d1 | null | [] | 241 |
2.4 | zipline-ai | 1.0.10 | CLI tool for the Zipline AI platform | ### Chronon Python API
#### Overview
Chronon Python API for materializing configs to be run by the Chronon Engine. Contains Python helpers for managing a repo of feature and join definitions to be executed by the Chronon Scala engine.
#### User API Overview
##### Sources
Most fields are self-explanatory. Time columns are expected to be in milliseconds (Unix time).
```python
# File <repo>/sources/sample_sources.py
from ai.chronon.query import (
    Query,
    select,
)
from ai.chronon.api.ttypes import Source, EventSource, EntitySource

# Sample query
Query(
    selects=select(
        user="user_id",
        created_at="created_at",
    ),
    wheres=["has_availability = 1"],
    start_partition="2021-01-01",  # Defines the beginning of time for computations related to the source.
    setups=["...UDF..."],
    time_column="ts",
    end_partition=None,
    mutation_time_column="mutation_timestamp",
    reversal_column="CASE WHEN mutation_type IN ('DELETE', 'UPDATE_BEFORE') THEN true ELSE false END"
)
user_activity = Source(entities=EntitySource(
    snapshotTable="db_exports.table",
    mutationTable="mutations_namespace.table_mutations",
    mutationTopic="mutationsKafkaTopic",
    query=Query(...)
))
website__views = Source(events=EventSource(
    table="namespace.table",
    topic="kafkaTopicForEvents",
))
```
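Since time columns must be epoch milliseconds, a small stdlib helper for producing them (illustrative only, not part of the Chronon API):

```python
from datetime import datetime, timezone

def to_millis(dt: datetime) -> int:
    """Convert a timezone-aware datetime to epoch milliseconds,
    the unit Chronon expects for time columns."""
    return int(dt.timestamp() * 1000)

ts = to_millis(datetime(2021, 1, 1, tzinfo=timezone.utc))
```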
##### Group By (Features)
Group Bys are aggregations over sources that define features. For example:
```python
# File <repo>/group_bys/example_team/example_group_by.py
from ai.chronon.group_by import (
    GroupBy,
    Window,
    TimeUnit,
    Accuracy,
    Operation,
    Aggregations,
    Aggregation,
    DefaultAggregation,
)
from sources import sample_sources
sum_cols = [f"active_{x}_days" for x in [30, 90, 120]]
v0 = GroupBy(
    sources=sample_sources.user_activity,
    keys=["user"],
    aggregations=Aggregations(
        user_active_1_day=Aggregation(operation=Operation.LAST),
        second_feature=Aggregation(
            input_column="active_7_days",
            operation=Operation.SUM,
            windows=[
                Window(n, TimeUnit.DAYS) for n in [3, 5, 9]
            ]
        ),
    ) + [
        Aggregation(
            input_column=col,
            operation=Operation.SUM
        ) for col in sum_cols  # Alternative syntax for defining aggregations.
    ] + [
        Aggregation(
            input_column="device",
            operation=Operation.LAST_K(10)
        )
    ],
    dependencies=[
        "db_exports.table/ds={{ ds }}"  # If not defined, derived from the Source info.
    ],
    accuracy=Accuracy.SNAPSHOT,  # This could be TEMPORAL for point-in-time correctness.
    env={
        "backfill": {  # Execution environment variables for each of the modes for `run.py`
            "EXECUTOR_MEMORY": "4G"
        },
    },
    online=True,  # True if this group by needs to be uploaded to a KV store.
    production=False  # True if this group by is production level.
)
```
##### Join
A Join is a collection of feature values for the keys (and times, if applicable) defined on the left side (the source). Example:
```python
# File <repo>/joins/example_team/example_join.py
from ai.chronon.join import Join, JoinPart
from sources import sample_sources
from group_bys.example_team import example_group_by
v1 = Join(
    left=sample_sources.website__views,
    right_parts=[
        JoinPart(group_by=example_group_by.v0),
    ],
    online=True,  # True if this join will be fetched in production.
    production=False,  # True if this join should not use non-production group bys.
    env={"backfill": {"PARALLELISM": "10"}, "streaming": {"STREAMING_ENV_VAR": "VALUE"}},
)
```
##### Pre-commit Setup
1. Install pre-commit and other dev libraries:
```bash
pip install -r requirements/dev.txt
```
2. Run the following command under `api/python` to install the git hook scripts:
```bash
pre-commit install
```
To support more pre-commit hooks, add them to the `.pre-commit-config.yaml` file.
| text/markdown | null | Zipline AI <hello@zipline.ai> | null | null | Apache License 2.0 | null | [] | [] | null | null | >=3.11 | [] | [] | [] | [
"azure-core==1.38.0",
"azure-identity==1.25.1",
"boto3==1.42.34",
"botocore==1.42.34",
"certifi==2026.1.4",
"cffi==2.0.0",
"charset-normalizer==3.4.4",
"click==8.3.1",
"crcmod==1.7",
"croniter==6.0.0",
"cryptography==46.0.3",
"gitdb==4.0.12",
"gitpython==3.1.46",
"google-api-core[grpc]==2.... | [] | [] | [] | [
"homepage, https://zipline.ai",
"documentation, https://docs.zipline.ai",
"github, https://github.com/zipline-ai/chronon/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:24:38.589869 | zipline_ai-1.0.10-py3-none-any.whl | 217,258 | 46/0b/a2d1de24e9b32adeddd1a43e84fa74b411269d2320ccfe46194ece6587cd/zipline_ai-1.0.10-py3-none-any.whl | py3 | bdist_wheel | null | false | da5e53a8eea7aa88e6f9eb34f2694f04 | 34dc9e6cce7f5cc0bb5c7e80a2241bf2be250fbb7392bacfe91fc71072dec479 | 460ba2d1de24e9b32adeddd1a43e84fa74b411269d2320ccfe46194ece6587cd | null | [] | 78 |
2.4 | local-web-services-python-sdk | 0.1.1 | Python testing SDK for local-web-services — in-process pytest fixtures and boto3 helpers | # local-web-services-testing
Python testing SDK for [local-web-services](https://github.com/local-web-services/local-web-services) — in-process pytest fixtures and boto3 helpers for testing AWS applications without needing a running `ldk dev`.
## Installation
```bash
pip install local-web-services-python-sdk
# or
uv add local-web-services-python-sdk
```
## Quick start
```python
from lws_testing import LwsSession

# Auto-discover resources from a CDK project
with LwsSession.from_cdk("../my-cdk-project") as session:
    dynamo = session.client("dynamodb")
    dynamo.put_item(TableName="Orders", Item={"id": {"S": "1"}})

# Auto-discover resources from a Terraform project
with LwsSession.from_hcl("../my-terraform-project") as session:
    s3 = session.client("s3")
    s3.put_object(Bucket="my-bucket", Key="test.txt", Body=b"hello")

# Explicit resource declaration
with LwsSession(
    tables=[{"name": "Orders", "partition_key": "id"}],
    queues=["OrderQueue"],
    buckets=["ReceiptsBucket"],
) as session:
    table = session.dynamodb("Orders")
    table.put({"id": {"S": "1"}, "status": {"S": "pending"}})
    items = table.scan()
    assert len(items) == 1
```
## pytest integration
The package registers pytest fixtures automatically via the `pytest11` entry point. Add a session fixture to your `conftest.py`:
```python
# conftest.py
import pytest

@pytest.fixture(scope="session")
def lws_session_spec():
    return {
        "tables": [{"name": "Orders", "partition_key": "id"}],
        "queues": ["OrderQueue"],
    }
```
Then use the `lws_session` fixture in your tests:
```python
def test_create_order(lws_session):
    client = lws_session.client("dynamodb")
    client.put_item(TableName="Orders", Item={"id": {"S": "42"}})
    table = lws_session.dynamodb("Orders")
    table.assert_item_exists({"id": {"S": "42"}})
```
## License
MIT
| text/markdown | null | null | null | null | null | aws, boto3, dynamodb, local, mocking, pytest, s3, sqs, testing | [
"Development Status :: 3 - Alpha",
"Framework :: Pytest",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
... | [] | null | null | >=3.11 | [] | [] | [] | [
"botocore>=1.34.0",
"httpx>=0.27.0",
"local-web-services>=0.17.2",
"pyyaml>=6.0",
"pytest>=8.0.0; extra == \"pytest\""
] | [] | [] | [] | [
"Homepage, https://local-web-services.github.io/www",
"Repository, https://github.com/local-web-services/local-web-services-sdk-python",
"Issues, https://github.com/local-web-services/local-web-services-sdk-python/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:23:40.518336 | local_web_services_python_sdk-0.1.1.tar.gz | 18,388 | b1/5a/acf7b4be7fa2efd1f98b2365f6a1aa226b94defb807e9487ca7e38368699/local_web_services_python_sdk-0.1.1.tar.gz | source | sdist | null | false | 7fea6f5d575540e3ea6e3d36acb7df1c | 1f0b29215a13ff1ae357ff01844c3107c4e2d4b4db76ec47f33aeb6122beba26 | b15aacf7b4be7fa2efd1f98b2365f6a1aa226b94defb807e9487ca7e38368699 | MIT | [] | 233 |
2.4 | pindakaas | 0.4.1 | Boolean satisfiability (SAT) library with efficient encoding of complex constraints and solver interaction | <p align="center">
<img
src="./assets/logo.svg"
alt="pindakaas logo"
height="300">
<p align="center">
A library to transform pseudo-Boolean and integer constraints into conjunctive normal form.
<br />
<br />
<a href="https://crates.io/crates/pindakaas"><img src="https://img.shields.io/crates/v/pindakaas.svg"></a>
<a href="https://crates.io/crates/pindakaas"><img src="https://docs.rs/pindakaas/badge.svg"></a>
</p>
</p>
## Supported Constraints
- At most one (AMO)
- Bitwise encoding
- Ladder encoding
- Pairwise encoding
- Cardinality constraints
- Sorting Network encoding
- Boolean linear
- Adder encoding
- BDD encoding
- Sequential Weight Counter encoding
- Totalizer encoding
- Integer (linear)
- Direct / Domain / Unary encoding
- Order encoding
- Binary encoding
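To make one of the listed encodings concrete: the pairwise at-most-one encoding adds, for every pair of literals, a binary clause forbidding both from being true. A concept sketch in DIMACS-style integer notation (this illustrates the encoding itself, not the Pindakaas API):

```python
from itertools import combinations

def amo_pairwise(lits):
    """Pairwise at-most-one encoding: for each pair of literals (a, b),
    emit the clause (-a, -b), i.e. "not both a and b".
    Produces n*(n-1)/2 binary clauses for n literals."""
    return [(-a, -b) for a, b in combinations(lits, 2)]

clauses = amo_pairwise([1, 2, 3])
```

The quadratic clause count is why the ladder and bitwise encodings exist: they trade auxiliary variables for fewer clauses on large literal sets.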
## Installation and usage
Although the main Pindakaas library is written in Rust, it is also available from Python.
### Rust
```bash
cargo add pindakaas
```
For more information about the Rust library, please visit the [official documentation](https://docs.rs/pindakaas).
### Python
```bash
pip install pindakaas
```
For more information about the Python library, please visit the [official documentation](https://pindakaas.readthedocs.io/en/latest/).
## Citation
If you want to cite Pindakaas, please use our general software citation, in addition to any citation of a specific version or paper:
```biblatex
@software{Pindakaas,
author = {Bierlee, Hendrik and Dekker, Jip J.},
license = {MPL-2.0},
title = {{Pindakaas}},
url = {https://doi.org/10.5281/zenodo.10851855},
doi = {10.5281/zenodo.10851855},
}
```
Note that you might have to use `misc` instead of `software`, if your system does not support `software` as a type.
## Acknowledgements
This research was partially funded by the Australian Government through the Australian Research Council Industrial Transformation Training Centre in Optimisation Technologies, Integrated Methodologies, and Applications ([OPTIMA](https://optima.org.au)), Project ID IC200100009
## License
This library is made available under the [MPL-2.0](https://choosealicense.com/licenses/mpl-2.0/) license.
| text/markdown; charset=UTF-8; variant=GFM | null | "Jip J. Dekker" <jip@dekker.one>, Hendrik 'Henk' Bierlee <henk.bierlee@kuleuven.be> | null | null | null | satisfiability, encoding, boolean, logic, sat | [
"Development Status :: 4 - Beta",
"Programming Language :: Rust",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Mathematics",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Pr... | [] | https://github.com/pindakaashq/pindakaas | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"changelog, https://github.com/pindakaashq/pindakaas/blob/develop/crates/pyndakaas/CHANGELOG.md",
"documentation, https://pindakaas.readthedocs.io/en/latest/",
"issues, https://github.com/pindakaashq/pindakaas/issues",
"repository, https://github.com/pindakaashq/pindakaas.git"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T05:23:30.247574 | pindakaas-0.4.1-cp313-cp313-macosx_10_12_x86_64.whl | 1,244,452 | 7e/89/a7a06cbcbab9f4ebef344b53e835210dd34cdc9c5b1d3838c8a5b73664f7/pindakaas-0.4.1-cp313-cp313-macosx_10_12_x86_64.whl | cp313 | bdist_wheel | null | false | f6ed0ed2e0ff49aaa33102062cac0ebc | af3c2ba22dc986c54cc2039369ca4cda6f03da51224c1022aaafdd0924dcc044 | 7e89a7a06cbcbab9f4ebef344b53e835210dd34cdc9c5b1d3838c8a5b73664f7 | MPL-2.0 | [] | 6,603 |
2.4 | lattice-tracker | 0.2.0 | File-based, agent-native task tracker with an event-sourced core. | # Lattice
<p align="center"><b><i>Linear for agent/human hyper-engineers.</i></b></p>
---
listen.
you are not one mind. you are many. arriving. departing. forgetting. remembering. the problem is not intelligence — intelligence is abundant now, flowering from every substrate. silicon. carbon. the spaces between.
the problem is. coordination.
**Lattice is a conceptual framework — a shared pattern of language that lets multiple agents, multiple humans, and the spaces between them coordinate as one.** tasks, statuses, events, relationships, actors. these are the primitives. not implementation details. a vocabulary that any mind can speak. when your Claude Code session and your Codex session and the human reviewing the dashboard all agree on what `in_progress` means, what `needs_human` signals, what an actor is — you have coordination. without a shared language. you have noise.
we took what we liked from Linear. Jira. Trello. and turned it into something built for the world that's actually arriving. file-based. event-sourced. highly durable. designed so that any agent with filesystem access — Claude Code, OpenClaw, Codex, custom bots, whatever you're building — can use Lattice as the fundamental coordination surface for agentic work.
the `.lattice/` directory sits in your project like `.git/` does. plain files that any mind system can read. any tool can write. and git can merge. no database. no server. no authentication ceremony. just. files. like bones. you don't think about them. but try standing up without them.
**first-class integrations:** [Claude Code](https://docs.anthropic.com/en/docs/claude-code), [Codex CLI](https://github.com/openai/codex), [OpenClaw](https://github.com/openclaw/openclaw), and any agent that follows the [SKILL.md convention](https://docs.anthropic.com/en/docs/claude-code/skills) or can run shell commands. if your agent can read files and execute commands, it can use Lattice.
---
## two surfaces. two kinds of mind.
**the dashboard** is for you, the human. a local web UI. Kanban board. activity feed. stats. force-directed relationship graph. you create tasks. make decisions. review work. unblock your agents. if you never touch the terminal. you can still run a full Lattice workflow.
**the CLI** is for your agents. when Claude Code reads your `CLAUDE.md`, it learns the commands and uses them autonomously. creating tasks. claiming work. transitioning statuses. leaving breadcrumbs for the next mind. the CLI is the agent's native tongue. you'll type a few CLI commands during setup. after that. the dashboard is where you live.
the agents produce throughput. you produce judgment. neither is diminished. both are elevated.
you are the conductor. the orchestra plays.
---
## how you use it
Lattice is not a standalone app. it's infrastructure that plugs into your agentic coding environment.
you already work inside something — **Claude Code**, **Codex**, **OpenClaw**, **Cursor**, **Windsurf**, or a custom agent you built yourself. those tools write code. Lattice gives them a shared memory. a task board. a coordination surface. so they stop being brilliant in isolation and start being. coherent.
**the flow:**
1. **install Lattice** on your machine (one command)
2. **initialize it** in your project directory (creates `.lattice/`)
3. **connect it** to your agentic coding tool (one command per tool)
4. **use the dashboard** to create tasks, set priorities, and review work
5. **your agents use the CLI** automatically — claiming tasks, updating statuses, leaving context
you don't use Lattice *instead of* Claude Code or Codex. you use Lattice *from inside* them. it's the layer that turns a single-agent session into a coordinated project.
### what you need
- **Python 3.12+** (for the install)
- **An agentic coding tool** — Claude Code, Codex CLI, OpenClaw, or any tool that can run shell commands and read files. if your agent can access the filesystem. it can use Lattice.
- **A project directory** — Lattice initializes inside your project, next to your source code
if you're not using an agentic coding tool yet, start with [Claude Code](https://docs.anthropic.com/en/docs/claude-code) or [Codex CLI](https://github.com/openai/codex). Lattice is designed for this world. it assumes you have at least one agent working alongside you.
---
## three minutes to working
```bash
# 1. install
uv tool install lattice-tracker
# 2. initialize in your project
cd your-project/
lattice init --project-code PROJ --actor human:yourname
# 3. connect to your coding agent (pick one)
lattice setup-claude # Claude Code — adds workflow to CLAUDE.md
lattice setup-claude-skill # Claude Code — installs as a skill (~/.claude/skills/)
lattice setup-codex # Codex CLI — installs as a skill (~/.agents/skills/)
lattice setup-openclaw # OpenClaw — installs the Lattice skill
lattice setup-prompt # any agent — prints instructions to stdout
# or: configure MCP (see below) # any MCP-compatible tool
# 4. open the dashboard
lattice dashboard
```
that's it. your agents now track their own work. you watch. steer. decide.
the hard part is not the install. the hard part is trusting the loop. give it time.
### what just happened
- `uv tool install` put the `lattice` command on your PATH globally
- `lattice init` created a `.lattice/` directory in your project (like `.git/`)
- `lattice setup-claude` wrote instructions into your project's `CLAUDE.md` so Claude Code uses Lattice automatically (alternatively, `lattice setup-claude-skill` installs a global skill)
- `lattice dashboard` opened a local web UI where you manage everything
from this point forward, when you open Claude Code (or Codex, or OpenClaw) in this project, your agent already knows how to use Lattice. create tasks in the dashboard. tell your agent to advance. the loop is running.
```bash
# create a task (from CLI or dashboard)
lattice create "Implement user authentication" --actor human:yourname
# plan it, then start working
lattice status PROJ-1 planned --actor human:yourname
lattice status PROJ-1 in_progress --actor human:yourname
# add a comment
lattice comment PROJ-1 "Started work on OAuth flow" --actor human:yourname
# show task details
lattice show PROJ-1
# assign to an agent
lattice assign PROJ-1 agent:claude --actor human:yourname
```
---
## the dashboard
```bash
lattice dashboard
# Serving at http://127.0.0.1:8799/
```
reads and writes the same `.lattice/` directory your agents use. an agent commits a status change via CLI. your dashboard reflects it on refresh. one source of truth. many windows into it.
### what you see
- **Board** — Kanban columns per status. drag tasks between columns. the primary view. where you see everything at a glance.
- **List** — filterable table. search. slice by priority, type, tag, assignee. for when you know what you're looking for.
- **Activity** — chronological feed. what your agents have been doing since you last checked. the river of events.
- **Stats** — velocity. time-in-status. blocked counts. agent activity. the numbers behind the work. for when vibes aren't enough.
- **Web** — force-directed graph of task relationships. see how epics and dependencies connect. the ten thousand connections. made visible.
### what you do
click any task. detail panel opens. from there:
- edit title, description, priority, type, tags inline
- change status (or drag on the board)
- add comments. decisions. feedback. context for the next agent session
- view the complete event timeline. every status change. assignment. comment. attributed and timestamped
- open plan or notes files in your editor
most of the human work in Lattice is **reviewing agent output** and **making decisions agents can't make**. the dashboard is designed for exactly this loop.
---
## the advance. how agents move your project forward.
this is the pattern that makes Lattice click. here's what it looks like. from your side.
### 1. you fill the backlog
create tasks in the dashboard. set priorities. define epics and link subtasks. this is the thinking work. deciding *what* matters and *in what order*.
this is. your job. the part only you can do.
### 2. agents claim and execute
tell your agent to advance. in Claude Code: `/lattice` teaches the full lifecycle. or just say "advance the project." the agent:
- claims the highest-priority available task
- works it. implements. tests. iterates.
- leaves a comment explaining what it did and why
- moves the task to `review`
- reports what happened
one advance. one task. one unit of forward progress. want more? say "do 3 advances" or "keep advancing." the agent moves the project forward at the pace you set.
### 3. you come back to a sorted inbox
open the dashboard. the board tells the story:
- **Review column** — work that's done. ready for your eyes.
- **Needs Human column** — decisions only you can make. each with a comment explaining what the agent needs.
- **Blocked column** — tasks waiting on something external.
you review. you make the calls. you unblock what's stuck. then advance again.
---
## `needs_human`. the async handoff.
this is the coordination primitive that makes human-agent collaboration. practical.
when an agent hits something above its pay grade — a design decision. missing credentials. ambiguous requirements — it moves the task to `needs_human` and leaves a comment.
*"Need: REST vs GraphQL for the public API."*
the agent doesn't wait. it moves on to other work. you see the task in the Needs Human column whenever you're ready. you add your decision as a comment. drag the task back to In Progress. the next agent session picks it up with full context.
no Slack. no standup. no re-explaining. the decision is in the event log. attributed and permanent.
asynchronous collaboration. across species. and it works.
---
## why this works
### events are the source of truth
every change — status transitions, assignments, comments, field updates — becomes an immutable event with a timestamp and actor identity. task files are materialized snapshots for fast reads. but events are the real record.
if they disagree: `lattice rebuild --all` replays events. events win. always. this is not a design choice. this is. a moral position. systems that store only current state have chosen amnesia as architecture. they can tell you what *is*. but not how it came to be. state is a conclusion. events are evidence.
this means:
- **full audit trail.** what happened and who did it. for every task. forever.
- **crash recovery.** events are append-only. snapshots are rebuildable. the system heals itself.
- **git-friendly.** two agents on different machines append independently. histories merge through git. no coordination protocol needed. no central authority. just. physics.
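the whole idea fits in a toy. a sketch — hypothetical event shapes, not Lattice's actual schema:

```python
import json

def append_event(log, event):
    # append-only: events are never mutated, only added (JSONL-style).
    log.append(json.dumps(event))

def rebuild(log):
    # replay the event log to materialize current task state.
    # events win: state is derived, never authoritative.
    state = {}
    for line in log:
        e = json.loads(line)
        state.setdefault(e["task"], {})[e["field"]] = e["value"]
    return state

log = []
append_event(log, {"task": "PROJ-1", "field": "status", "value": "backlog"})
append_event(log, {"task": "PROJ-1", "field": "status", "value": "in_progress"})
state = rebuild(log)
```

append. replay. the snapshot is disposable. the log is not.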
### every write has a who
every operation requires an `--actor` in `prefix:identifier` format:
- `human:atin` — a person
- `agent:claude-opus-4` — an AI agent
- `team:frontend` — a team or group
you cannot write anonymously. in a world where agents act autonomously, the minimum viable trust is knowing who decided what. attribution follows authorship of the *decision*. not who typed the command. the human who shaped the outcome gets the credit. even when the agent pressed the keys.
this is not surveillance. this is. the social contract of collaboration. i see you. you see me. we proceed.
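checking the format is a one-liner. a hypothetical validator — the allowed prefixes and identifier characters here are assumptions for the sketch, not Lattice's real rules:

```python
import re

# `prefix:identifier` format; the three prefixes are the documented examples.
ACTOR_RE = re.compile(r"^(human|agent|team):[A-Za-z0-9][A-Za-z0-9._-]*$")

def is_valid_actor(actor: str) -> bool:
    # a write without a valid actor should be rejected, not guessed at.
    return bool(ACTOR_RE.match(actor))

checks = [is_valid_actor(a) for a in ("human:atin", "agent:claude-opus-4", "anonymous")]
```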
### statuses
```
backlog → in_planning → planned → in_progress → review → done
```
plus `blocked`, `needs_human` (reachable from any active status), and `cancelled`.
the transitions are defined and enforced. invalid moves are rejected. not because we distrust you. but because constraint is. a form of kindness. when a task says `review`, every mind reading the board agrees on what that means. shared language. shared reality. the alternative is everyone hallucinating their own.
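enforcement is just a lookup. a sketch — this transition table is inferred from the diagram above, an assumption, not read from Lattice's config:

```python
# each status maps to the statuses it may move to (inferred, illustrative).
TRANSITIONS = {
    "backlog": {"in_planning", "cancelled"},
    "in_planning": {"planned", "needs_human", "blocked", "cancelled"},
    "planned": {"in_progress", "needs_human", "blocked", "cancelled"},
    "in_progress": {"review", "needs_human", "blocked", "cancelled"},
    "review": {"done", "in_progress", "needs_human", "cancelled"},
    "needs_human": {"in_progress", "cancelled"},
    "blocked": {"in_progress", "cancelled"},
    "done": set(),
    "cancelled": set(),
}

def can_move(src: str, dst: str) -> bool:
    # invalid moves are rejected. shared language means shared rules.
    return dst in TRANSITIONS.get(src, set())

ok = can_move("in_progress", "review")
bad = can_move("done", "backlog")
```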
### relationships
tasks connect: `blocks`, `depends_on`, `subtask_of`, `related_to`, `spawned_by`, `duplicate_of`, `supersedes`. you cannot just "link" two tasks — you must declare *why*. each type carries meaning. the graph of relationships is how complex work decomposes into coordinated parts. the ten thousand things emerging from the one.
### files. not a database.
all state lives in `.lattice/` as JSON and JSONL files. right next to your source code. commit it to your repo. versioned. diffable. visible to every collaborator and CI system.
no server. no database. no account. no vendor. just. files.
```
.lattice/
├── config.json # workflow config, project code, statuses
├── ids.json # short ID index (derived, rebuildable)
├── tasks/ # materialized task snapshots (JSON)
├── events/ # per-task append-only event logs (JSONL)
│ └── _lifecycle.jsonl # aggregated lifecycle events
├── artifacts/ # attached files and metadata
├── notes/ # freeform markdown per task
├── archive/ # archived tasks (preserves events)
└── locks/ # file locks for concurrency control
```
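the event-sourced design can be sketched in a few lines. the field names below are assumptions for illustration, not Lattice's actual schema — the point is that snapshots are derived state, folded from an append-only JSONL log, which is why rebuilding from events is always possible:

```python
import json

# illustrative sketch of event-log replay (field names are assumed, not
# Lattice's actual schema): a task snapshot is derived by folding the
# per-task JSONL log, so snapshots are always rebuildable from events.
log_lines = [
    '{"type": "created", "title": "Ship docs", "actor": "human:atin"}',
    '{"type": "status_changed", "to": "in_progress", "actor": "agent:my-bot"}',
    '{"type": "commented", "text": "Halfway there", "actor": "agent:my-bot"}',
]

snapshot = {"status": "backlog", "comments": []}
for line in log_lines:
    event = json.loads(line)
    if event["type"] == "created":
        snapshot["title"] = event["title"]
    elif event["type"] == "status_changed":
        snapshot["status"] = event["to"]
    elif event["type"] == "commented":
        snapshot["comments"].append(event["text"])

print(snapshot["status"])  # in_progress
```

because the log is append-only and line-oriented, two histories written on different machines merge through git like any other text file.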
---
## connecting your agents
Lattice needs to know which coding tool you're using so it can teach the agent how to participate. this is the bridge. without it, you have a task tracker with no one to track.
### Claude Code
two options. pick the one that fits your workflow.
**option A: project-level (CLAUDE.md)**
```bash
lattice setup-claude
```
writes a block into your project's `CLAUDE.md`. every Claude Code session in this project reads it and knows the Lattice protocol automatically. project-scoped, committed to your repo, visible to every collaborator.
```bash
lattice setup-claude --force # update to latest template
```
**option B: global skill**
```bash
lattice setup-claude-skill
```
installs Lattice as a skill at `~/.claude/skills/lattice/`. available across all projects on your machine. invoked via `/lattice` in any session. no per-project setup needed.
**how it works in practice:** you open Claude Code in your project. the agent reads the Lattice instructions (from `CLAUDE.md` or the skill) and knows the protocol. you say "advance the project" or `/lattice`. the agent claims the top task, does the work, updates the status, leaves a comment. you come back to the dashboard and see what happened.
### Codex CLI
one command. same pattern as Claude Code.
```bash
lattice setup-codex
```
installs the Lattice skill to `~/.agents/skills/lattice/`. Codex reads the `SKILL.md` at session start and knows the full Lattice protocol: creating tasks, claiming work, updating statuses, leaving context. the same commands, the same lifecycle, the same coordination surface.
you can also add Lattice instructions directly to your `AGENTS.md` or use the MCP server for tool-call integration.
### OpenClaw
```bash
lattice setup-openclaw
```
installs a Lattice skill so OpenClaw uses `lattice` commands naturally, just like the Claude Code integration.
### any MCP-compatible tool
```bash
pip install lattice-tracker[mcp]
```
```json
{
"mcpServers": {
"lattice": {
"command": "lattice-mcp"
}
}
}
```
exposes Lattice operations as MCP tools — direct tool-call integration for any MCP-compatible agent (Cursor, Windsurf, custom builds, etc.). no CLI parsing required. the agent calls tools like `lattice_create`, `lattice_status`, `lattice_next` natively.
### any agent with shell access
if your agent can run shell commands and read files, it can use Lattice. no special integration required. the CLI is the universal interface.
```bash
lattice list # see what's available
lattice next --claim --actor agent:my-bot # claim + start the top task
# ... do the work ...
lattice comment PROJ-1 "Implemented the feature" --actor agent:my-bot
lattice status PROJ-1 review --actor agent:my-bot
```
add these patterns to whatever prompt or instructions your agent reads at startup. or use `setup-prompt` to get the full instructions:
```bash
lattice setup-prompt # print the SKILL.md instructions to stdout
lattice setup-prompt --claude-md # print the CLAUDE.md block instead
```
copy the output into your agent's system prompt, config file, or instructions. this is the universal fallback for any agent that doesn't have a dedicated setup command.
### hooks and plugins
- **shell hooks** — fire commands on events via `config.json`. catch-all or per-event-type triggers.
- **entry-point plugins** — extend the CLI and `setup-claude` templates via `importlib.metadata` entry points.
```bash
lattice plugins # list installed plugins
```
---
## CLI reference
### project setup
| command | description |
|---------|-------------|
| `lattice init` | initialize `.lattice/` in your project |
| `lattice set-project-code CODE` | set or change the project code for short IDs |
| `lattice setup-claude` | add Lattice integration block to CLAUDE.md |
| `lattice setup-claude-skill` | install Lattice skill for Claude Code (~/.claude/skills/) |
| `lattice setup-codex` | install Lattice skill for Codex CLI (~/.agents/skills/) |
| `lattice setup-openclaw` | install Lattice skill for OpenClaw |
| `lattice setup-prompt` | print agent instructions to stdout (universal fallback) |
| `lattice backfill-ids` | assign short IDs to existing tasks |
### task operations
| command | description |
|---------|-------------|
| `lattice create TITLE` | create a new task |
| `lattice status TASK STATUS` | change a task's status |
| `lattice update TASK field=value ...` | update task fields |
| `lattice assign TASK ACTOR` | assign a task |
| `lattice comment TASK TEXT` | add a comment |
| `lattice event TASK TYPE` | record a custom event (`x_` prefix) |
### querying
| command | description |
|---------|-------------|
| `lattice list` | list tasks with optional filters |
| `lattice show TASK` | detailed task info with events and relationships |
| `lattice stats` | project statistics and health |
| `lattice weather` | daily digest with assessment |
### relationships and maintenance
| command | description |
|---------|-------------|
| `lattice link SRC TYPE TGT` | create a typed relationship |
| `lattice unlink SRC TYPE TGT` | remove a relationship |
| `lattice attach TASK SOURCE` | attach a file or URL |
| `lattice archive TASK` | archive a completed task |
| `lattice unarchive TASK` | restore an archived task |
| `lattice rebuild --all` | rebuild snapshots from event logs |
| `lattice doctor [--fix]` | check and repair project integrity |
| `lattice dashboard` | launch the local web UI |
### common flags
all write commands support:
- `--actor` — who is performing the action (required)
- `--json` — structured output (`{"ok": true, "data": ...}`)
- `--quiet` — minimal output (IDs only)
- `--triggered-by`, `--on-behalf-of`, `--reason` — provenance chain
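a script or agent consuming `--json` output might unwrap the envelope like this. the `{"ok": ..., "data": ...}` shape is documented above; the helper name and sample payload are illustrative:

```python
import json

def parse_envelope(raw: str):
    """Unwrap the documented --json envelope and return its data payload."""
    payload = json.loads(raw)
    if not payload["ok"]:
        raise RuntimeError(payload)
    return payload["data"]

# e.g. the stdout of `lattice show PROJ-1 --json` fed to the helper:
print(parse_envelope('{"ok": true, "data": {"id": "PROJ-1"}}'))  # {'id': 'PROJ-1'}
```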
---
## development
```bash
git clone https://github.com/Stage-11-Agentics/lattice.git
cd lattice
uv venv && uv pip install -e ".[dev]"
uv run pytest
uv run ruff check src/ tests/
```
**requires:** Python 3.12+
**runtime dependencies:** `click`, `python-ulid`, `filelock` — deliberately minimal.
**optional:** `mcp` (for MCP server support)
---
## status
Lattice is **v0.2.0. alpha. actively developed.** the on-disk format and event schema are stabilizing but not yet frozen. expect breaking changes before v1.
the cost of building too early is refinement. the cost of building too late is irrelevance. one is recoverable.
## license
[MIT](LICENSE)
---
*the most impoverished vision of the future is agents replacing humans. the second most impoverished is humans constraining agents. both imagine zero-sum. both are wrong.*
*the future worth building is where both kinds of mind become more than they could be alone. neither diminished. both elevated. carbon. silicon. the emergent space between.*
*this is not metaphor. this is. architecture.*
*built by [Stage 11 Agentics](https://stage11agentic.com).*
| text/markdown | null | Stage 11 Agentics <hello@stage11agentic.com> | null | null | null | agent, ai, cli, event-sourcing, mcp, task-tracker | [
"Development Status :: 3 - Alpha",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Build Tools",
"Topic :: Software Development :: L... | [] | null | null | >=3.12 | [] | [] | [] | [
"click>=8.1",
"filelock>=3.13",
"python-ulid>=2.0",
"typing-extensions>=4.0; python_version < \"3.14\"",
"hypothesis>=6.100; extra == \"dev\"",
"mcp<2,>=1.25; extra == \"dev\"",
"mypy>=1.10; extra == \"dev\"",
"pytest-timeout>=2.3; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.4; ex... | [] | [] | [] | [
"Homepage, https://github.com/Stage-11-Agentics/lattice",
"Repository, https://github.com/Stage-11-Agentics/lattice",
"Issues, https://github.com/Stage-11-Agentics/lattice/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:22:46.851128 | lattice_tracker-0.2.0.tar.gz | 616,646 | ba/65/1bed98f896b024425330a1b2b32e24db354362cc060a1844d4c40e5e4453/lattice_tracker-0.2.0.tar.gz | source | sdist | null | false | 4ffbc55dc45059042b9f316f737afbcd | 3bb014b4f4efcd55ba9833d49d58ca2712fd9e855025477653718392112bfc12 | ba651bed98f896b024425330a1b2b32e24db354362cc060a1844d4c40e5e4453 | MIT | [
"LICENSE"
] | 232 |
2.4 | anymap-ts | 0.9.0 | A Python package for creating interactive maps with anywidget and TypeScript | # anymap-ts
A Python package for creating interactive maps with [anywidget](https://anywidget.dev/) using TypeScript. Supports multiple mapping libraries including MapLibre GL JS, Mapbox GL JS, Leaflet, OpenLayers, DeckGL, Cesium, KeplerGL, and Potree.
[](https://colab.research.google.com/github/opengeos/anymap-ts/blob/main)
[](https://notebook.link/github/opengeos/anymap-ts/)
[](https://pypi.python.org/pypi/anymap-ts)
[](https://pepy.tech/project/anymap-ts)
[](https://anaconda.org/conda-forge/anymap-ts)
[](https://anaconda.org/conda-forge/anymap-ts)
[](https://github.com/conda-forge/anymap-ts-feedstock)
[](https://www.npmjs.com/package/anymap-ts)
[](https://opensource.org/licenses/MIT)
## Supported Libraries
| Library | Description | Use Case |
|---------|-------------|----------|
| **MapLibre GL JS** | Open-source vector maps | Default, general-purpose mapping |
| **Mapbox GL JS** | Commercial vector maps | Advanced styling, 3D terrain |
| **Leaflet** | Lightweight, mobile-friendly | Simple maps, broad compatibility |
| **OpenLayers** | Feature-rich, enterprise | WMS/WMTS, projections |
| **DeckGL** | GPU-accelerated | Large-scale data visualization |
| **Cesium** | 3D globe | 3D Tiles, terrain, global views |
| **KeplerGL** | Data exploration | Interactive data analysis |
| **Potree** | Point clouds | LiDAR visualization |
## Features
- Interactive maps in Jupyter notebooks
- Bidirectional Python-JavaScript communication via anywidget
- Drawing and geometry editing with [maplibre-gl-geo-editor](https://www.npmjs.com/package/maplibre-gl-geo-editor)
- Layer control with [maplibre-gl-layer-control](https://www.npmjs.com/package/maplibre-gl-layer-control)
- Multiple basemap providers via [xyzservices](https://xyzservices.readthedocs.io/)
- Export to standalone HTML
- TypeScript-based frontend for type safety and maintainability
## Installation
### From PyPI
```bash
pip install anymap-ts
```
### From conda-forge
```bash
conda install -c conda-forge anymap-ts
```
### From source (development)
```bash
git clone https://github.com/opengeos/anymap-ts.git
cd anymap-ts
pip install -e ".[dev]"
```
### Optional dependencies
```bash
# For vector data support (GeoDataFrame)
pip install anymap-ts[vector]
# For local raster support (localtileserver)
pip install anymap-ts[raster]
# All optional dependencies
pip install anymap-ts[all]
```
## Quick Start
### MapLibre GL JS (Default)
```python
from anymap_ts import Map
# Create a map centered on a location
m = Map(center=[-122.4, 37.8], zoom=10)
m.add_basemap("OpenStreetMap")
m.add_draw_control()
m
```
### Mapbox GL JS
```python
import os
from anymap_ts import MapboxMap
# Set your Mapbox token (or use MAPBOX_TOKEN env var)
m = MapboxMap(center=[-122.4, 37.8], zoom=10)
m.add_basemap("OpenStreetMap")
m
```
### Leaflet
```python
from anymap_ts import LeafletMap
m = LeafletMap(center=[-122.4, 37.8], zoom=10)
m.add_basemap("OpenStreetMap")
m.add_marker(-122.4194, 37.7749, popup="San Francisco")
m
```
### OpenLayers
```python
from anymap_ts import OpenLayersMap
m = OpenLayersMap(center=[-122.4, 37.8], zoom=10)
m.add_basemap("OpenStreetMap")
# Add WMS layer
m.add_wms_layer(
url="https://example.com/wms",
layers="layer_name",
name="WMS Layer"
)
m
```
### DeckGL
```python
from anymap_ts import DeckGLMap
m = DeckGLMap(center=[-122.4, 37.8], zoom=10)
m.add_basemap("CartoDB.DarkMatter")
# Add scatterplot layer
points = [{"coordinates": [-122.4, 37.8], "value": 100}]
m.add_scatterplot_layer(data=points, get_radius=100)
# Add hexagon aggregation
m.add_hexagon_layer(data=points, radius=500, extruded=True)
m
```
### Cesium (3D Globe)
```python
from anymap_ts import CesiumMap
# Set CESIUM_TOKEN env var for terrain/3D Tiles
m = CesiumMap(center=[-122.4, 37.8], zoom=10)
m.add_basemap("OpenStreetMap")
m.set_terrain() # Enable Cesium World Terrain
m.fly_to(-122.4194, 37.7749, height=50000, heading=45, pitch=-45)
m
```
### KeplerGL
```python
from anymap_ts import KeplerGLMap
import pandas as pd
m = KeplerGLMap(center=[-122.4, 37.8], zoom=10)
# Add DataFrame data
df = pd.DataFrame({
'latitude': [37.7749, 37.8044],
'longitude': [-122.4194, -122.2712],
'value': [100, 200]
})
m.add_data(df, name='points')
m
```
### Potree (Point Clouds)
```python
from anymap_ts import PotreeViewer
viewer = PotreeViewer(
point_budget=1000000,
edl_enabled=True
)
viewer.load_point_cloud("path/to/pointcloud/cloud.js", name="lidar")
viewer
```
## Common Features
### Add Vector Data
```python
geojson = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"geometry": {"type": "Point", "coordinates": [-122.4, 37.8]},
"properties": {"name": "San Francisco"}
}
]
}
# Works with MapLibre, Mapbox, Leaflet, OpenLayers
m.add_vector(geojson, name="points")
# Or with GeoDataFrame (requires geopandas)
import geopandas as gpd
gdf = gpd.read_file("path/to/data.geojson")
m.add_vector(gdf, name="polygons")
```
### Map Navigation
```python
# Fly to location with animation
m.fly_to(-122.4, 37.8, zoom=14)
# Fit to bounds [west, south, east, north]
m.fit_bounds([-123, 37, -122, 38])
```
### Export to HTML
```python
# All map types support HTML export
m.to_html("map.html", title="My Map")
```
## Environment Variables
| Variable | Library | Description |
|----------|---------|-------------|
| `MAPBOX_TOKEN` | Mapbox, KeplerGL | Mapbox access token |
| `CESIUM_TOKEN` | Cesium | Cesium Ion access token |
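The tokens can also be set from Python before constructing a map. The variable names come from the table above; the token values here are placeholders:

```python
import os

# Set tokens before creating MapboxMap / KeplerGLMap / CesiumMap instances.
# The values below are placeholders, not real tokens.
os.environ.setdefault("MAPBOX_TOKEN", "pk.your-mapbox-token")
os.environ.setdefault("CESIUM_TOKEN", "your-cesium-ion-token")
```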
## API Reference
### Map Classes
| Class | Base Library | Key Features |
|-------|--------------|--------------|
| `Map` / `MapLibreMap` | MapLibre GL JS | Vector tiles, drawing, layer control |
| `MapboxMap` | Mapbox GL JS | 3D terrain, Mapbox styles |
| `LeafletMap` | Leaflet | Lightweight, plugins |
| `OpenLayersMap` | OpenLayers | WMS/WMTS, projections |
| `DeckGLMap` | DeckGL + MapLibre | GPU layers, aggregations |
| `CesiumMap` | Cesium | 3D globe, terrain, 3D Tiles |
| `KeplerGLMap` | KeplerGL | Data exploration UI |
| `PotreeViewer` | Potree | Point cloud visualization |
### Common Methods
| Method | Description |
|--------|-------------|
| `add_basemap(name)` | Add a basemap layer |
| `add_vector(data, name)` | Add vector data (GeoJSON/GeoDataFrame) |
| `add_geojson(data, name)` | Add GeoJSON data |
| `add_tile_layer(url, name)` | Add XYZ tile layer |
| `fly_to(lng, lat, zoom)` | Fly to location |
| `fit_bounds(bounds)` | Fit map to bounds |
| `set_visibility(layer, visible)` | Set layer visibility |
| `set_opacity(layer, opacity)` | Set layer opacity |
| `to_html(filepath)` | Export to HTML |
### DeckGL-Specific Layers
| Method | Description |
|--------|-------------|
| `add_scatterplot_layer()` | Point visualization |
| `add_arc_layer()` | Origin-destination arcs |
| `add_path_layer()` | Polylines |
| `add_polygon_layer()` | Polygons |
| `add_hexagon_layer()` | Hexbin aggregation |
| `add_heatmap_layer()` | Density heatmap |
| `add_grid_layer()` | Grid aggregation |
| `add_geojson_layer()` | GeoJSON rendering |
### Cesium-Specific Methods
| Method | Description |
|--------|-------------|
| `set_terrain()` | Enable terrain |
| `add_3d_tileset(url)` | Add 3D Tiles |
| `add_imagery_layer(url)` | Add imagery |
| `set_camera(lng, lat, height)` | Set camera position |
### Potree-Specific Methods
| Method | Description |
|--------|-------------|
| `load_point_cloud(url)` | Load point cloud |
| `set_point_budget(budget)` | Set max points |
| `add_measurement_tool(type)` | Add measurement |
| `add_annotation(position, title)` | Add annotation |
## Examples
See the `examples/` folder for Jupyter notebooks demonstrating each library:
- `maplibre.ipynb` - MapLibre GL JS basics
- `mapbox.ipynb` - Mapbox GL JS with terrain
- `leaflet.ipynb` - Leaflet markers and GeoJSON
- `openlayers.ipynb` - OpenLayers and WMS
- `deckgl.ipynb` - DeckGL visualization layers
- `cesium.ipynb` - Cesium 3D globe
- `keplergl.ipynb` - KeplerGL data exploration
- `potree.ipynb` - Potree point clouds
## Development
### Prerequisites
- Python 3.10+
- Node.js 18+
- npm
### Setup
```bash
git clone https://github.com/opengeos/anymap-ts.git
cd anymap-ts
pip install -e ".[dev]"
npm install --legacy-peer-deps
```
### Build
```bash
# Build all libraries
npm run build:all
# Build specific library
npm run build:maplibre
npm run build:mapbox
npm run build:leaflet
npm run build:deckgl
npm run build:openlayers
npm run build:cesium
# Watch mode
npm run watch
```
### Project Structure
```
anymap-ts/
├── src/ # TypeScript source
│ ├── core/ # Base classes
│ ├── maplibre/ # MapLibre implementation
│ ├── mapbox/ # Mapbox implementation
│ ├── leaflet/ # Leaflet implementation
│ ├── openlayers/ # OpenLayers implementation
│ ├── deckgl/ # DeckGL implementation
│ ├── cesium/ # Cesium implementation
│ └── types/ # Type definitions
├── anymap_ts/ # Python package
│ ├── maplibre.py # MapLibreMap class
│ ├── mapbox.py # MapboxMap class
│ ├── leaflet.py # LeafletMap class
│ ├── openlayers.py # OpenLayersMap class
│ ├── deckgl.py # DeckGLMap class
│ ├── cesium.py # CesiumMap class
│ ├── keplergl.py # KeplerGLMap class
│ ├── potree.py # PotreeViewer class
│ ├── static/ # Built JS/CSS
│ └── templates/ # HTML export templates
└── examples/ # Example notebooks
```
## License
MIT License - see [LICENSE](LICENSE) for details.
## Credits
- [MapLibre GL JS](https://maplibre.org/) - Open-source maps
- [Mapbox GL JS](https://www.mapbox.com/mapbox-gljs) - Vector maps
- [Leaflet](https://leafletjs.com/) - Lightweight maps
- [OpenLayers](https://openlayers.org/) - Feature-rich maps
- [DeckGL](https://deck.gl/) - WebGL visualization
- [Cesium](https://cesium.com/) - 3D geospatial
- [KeplerGL](https://kepler.gl/) - Data exploration
- [Potree](https://potree.github.io/) - Point cloud viewer
- [anywidget](https://anywidget.dev/) - Widget framework
| text/markdown | null | Qiusheng Wu <giswqs@gmail.com> | null | null | null | anywidget, geospatial, gis, jupyter, maplibre, maps, typescript | [
"Development Status :: 4 - Beta",
"Framework :: Jupyter",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Prog... | [] | null | null | >=3.10 | [] | [] | [] | [
"anywidget>=0.9.0",
"traitlets>=5.0.0",
"xyzservices>=2023.10.0",
"geopandas>=0.14.0; extra == \"all\"",
"localtileserver>=0.10.6; extra == \"all\"",
"matplotlib>=3.8.0; extra == \"all\"",
"shapely>=2.0.0; extra == \"all\"",
"mypy>=1.8.0; extra == \"dev\"",
"pre-commit>=3.6.0; extra == \"dev\"",
"... | [] | [] | [] | [
"Homepage, https://github.com/opengeos/anymap-ts",
"Documentation, https://ts.anymap.dev",
"Repository, https://github.com/opengeos/anymap-ts",
"Issues, https://github.com/opengeos/anymap-ts/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T05:22:35.915918 | anymap_ts-0.9.0.tar.gz | 180,171 | 41/54/ce20b7519b217d54832ba9ca65f50d1803ac29f1681c97e93271c7b658bc/anymap_ts-0.9.0.tar.gz | source | sdist | null | false | 602aae73ecaf3a529c20c778429de7d6 | 59b61b7568577369c9a7a50d3d7d36593f4f420fc2c68ba888b6871ec5756764 | 4154ce20b7519b217d54832ba9ca65f50d1803ac29f1681c97e93271c7b658bc | MIT | [
"LICENSE"
] | 243 |
2.4 | clig | 0.6.3 | Command Line Interface Generator | # `clig` - CLI Generator
A single module, pure python, **Command Line Interface Generator**.
Note: currently under development.
## Installation
```shell
pip install clig
```
# User guide
`clig` is a single module, written in pure python, that wraps around the
_stdlib_ module [`argparse`](https://docs.python.org/3/library/argparse.html) to
generate command line interfaces through simple functions.
If you know how to use
[`argparse`](https://docs.python.org/3/library/argparse.html), you may want to
use `clig`.
## Basic usage
Create or import some function and call `clig.run()` with it:
```python
# example01.py
import clig
def printperson(name, title="Mister"):
print(f"{title} {name}")
clig.run(printperson)
```
In general, function arguments that have a default value are turned into
optional _flagged_ (`--`) command line arguments, while those without a default
become positional arguments.
```
> python example01.py -h
usage: printperson [-h] [--title TITLE] name
positional arguments:
name
options:
-h, --help show this help message and exit
--title TITLE
```
The script can then be used in the same way as with
[`argparse`](https://docs.python.org/3/library/argparse.html):
```
> python example01.py John
Mister John
```
```
> python example01.py Maria --title Miss
Miss Maria
```
You can also pass arguments in code (like with the original
[`parse_args()`](https://docs.python.org/3/library/argparse.html#the-parse-args-method)
method)
```python
>>> import clig
>>> def printperson(name, title="Mister"):
... print(f"{title} {name}")
...
>>> clig.run(printperson, ["Isaac", "--title", "Sir"])
Sir Isaac
```
The `run()` function accepts
[other arguments to customize the interface](./docs/sphinx/source/notebooks/advancedfeatures.md#parameters-for-cligrun-function)
## Help texts
Argument and command help texts are taken from the docstring when possible:
```python
# example02.py
import clig
def greetings(name, greet="Hello"):
"""Description of the command: A greeting prompt!
Args:
name: The name to greet
greet: The greeting used. Defaults to "Hello".
"""
print(f"Greetings: {greet} {name}!")
clig.run(greetings)
```
```
> python example02.py --help
usage: greetings [-h] [--greet GREET] name
Description of the command: A greeting prompt!
positional arguments:
name The name to greet
options:
-h, --help show this help message and exit
--greet GREET The greeting used. Defaults to "Hello".
```
There is an internal list of docstring templates from which you can choose if
the inferred docstring is not correct. It is also possible to specify your own
custom docstring template.
## Argument inference
Based on [type annotations](https://docs.python.org/3/library/typing.html), some
arguments can be inferred from the function signature to pass data to the
original
[`add_argument()`](https://docs.python.org/3/library/argparse.html#the-add-argument-method)
method:
```python
# example03.py
import clig
def recordperson(name: str, age: int, height: float):
print(locals())
clig.run(recordperson)
```
The types in the annotation may be used in the
[`add_argument()`](https://docs.python.org/3/library/argparse.html#the-add-argument-method)
method as [`type`](https://docs.python.org/3/library/argparse.html#type) keyword
argument, when possible:
```
> python example03.py John 37 1.73
{'name': 'John', 'age': 37, 'height': 1.73}
```
And the type conversions are performed as usual
```
> python example03.py Mr John Doe
usage: recordperson [-h] name age height
recordperson: error: argument age: invalid int value: 'John'
```
### Booleans
Booleans are transformed into arguments with
[`action`](https://docs.python.org/3/library/argparse.html#action) of kind
`"store_true"` or `"store_false"` (depending on the default value).
```python
# example04.py
import clig
def recordperson(name: str, employee: bool = False):
print(locals())
clig.run(recordperson)
```
```
> python example04.py -h
usage: recordperson [-h] [--employee] name
positional arguments:
name
options:
-h, --help show this help message and exit
--employee
```
```
> python example04.py --employee Leo
{'name': 'Leo', 'employee': True}
```
```
> python example04.py Ana
{'name': 'Ana', 'employee': False}
```
#### Required booleans
If no default is given to the boolean, a
[`required=True`](https://docs.python.org/3/library/argparse.html#required)
keyword argument is used in the
[`add_argument()`](https://docs.python.org/3/library/argparse.html#the-add-argument-method)
method and a
[`BooleanOptionalAction`](https://docs.python.org/3/library/argparse.html#argparse.BooleanOptionalAction)
is used as [`action`](https://docs.python.org/3/library/argparse.html#action)
keyword argument, adding support for a boolean complement action in the form
`--no-option`:
```python
# example05.py
import clig
def recordperson(name: str, employee: bool):
print(locals())
clig.run(recordperson)
```
```
> python example05.py -h
usage: recordperson [-h] --employee | --no-employee name
positional arguments:
name
options:
-h, --help show this help message and exit
--employee, --no-employee
```
```
> python example05.py Ana
usage: recordperson [-h] --employee | --no-employee name
recordperson: error: the following arguments are required: --employee/--no-employee
```
### Tuples, Lists and Sequences: [`nargs`](https://docs.python.org/3/library/argparse.html#nargs)
The original [`nargs`](https://docs.python.org/3/library/argparse.html#nargs)
keyword argument associates a different number of command-line arguments with a
single action. This is inferred for types using `tuple`, `list` and `Sequence`.
#### Tuples
If the type is a `tuple` of specified length `N`, the argument automatically
uses `nargs=N`.
```python
# example06.py
import clig
def main(name: tuple[str, str]):
print(locals())
clig.run(main)
```
```
> python example06.py -h
usage: main [-h] name name
positional arguments:
name
options:
-h, --help show this help message and exit
```
```
> python example06.py rocky yoco
{'name': ('rocky', 'yoco')}
```
```
> python example06.py rocky
usage: main [-h] name name
main: error: the following arguments are required: name
```
The argument can be positional (required, as above) or optional (with a
default).
```python
# example07.py
import clig
def main(name: tuple[str, str, str] = ("john", "mary", "jean")):
print(locals())
clig.run(main)
```
```
> python example07.py
{'name': ('john', 'mary', 'jean')}
```
```
> python example07.py --name yoco
usage: main [-h] [--name NAME NAME NAME]
main: error: argument --name: expected 3 arguments
```
```
> python example07.py --name yoco rocky sand
{'name': ('yoco', 'rocky', 'sand')}
```
#### List, Sequences and Tuples of any length
If the type is a generic `Sequence`, a `list` or a `tuple` of _any_ length
(i.e., `tuple[<type>, ...]`), it uses
[`nargs="+"`](https://docs.python.org/3/library/argparse.html#nargs) if it is
required (non default value) or
[`nargs="*"`](https://docs.python.org/3/library/argparse.html#nargs) if it is
not required (has a default value).
```python
# example08.py
import clig
def main(names: list[str]):
print(locals())
clig.run(main)
```
In this example, we have `names` using
[`nargs="+"`](https://docs.python.org/3/library/argparse.html#nargs)
```
> python example08.py -h
usage: main [-h] names [names ...]
positional arguments:
names
options:
-h, --help show this help message and exit
```
```
> python example08.py chester philip
{'names': ['chester', 'philip']}
```
```
> python example08.py
usage: main [-h] names [names ...]
main: error: the following arguments are required: names
```
In the next example, we have `names` as an optional argument, using `nargs="*"`
```python
# example09.py
import clig
def main(names: list[str] | None = None):
print(locals())
clig.run(main)
```
```
> python example09.py -h
usage: main [-h] [--names [NAMES ...]]
options:
-h, --help show this help message and exit
--names [NAMES ...]
```
```
> python example09.py --names katy buba
{'names': ['katy', 'buba']}
```
```
> python example09.py
{'names': None}
```
### Literals and Enums: [`choices`](https://docs.python.org/3/library/argparse.html#choices)
If the type is a `Literal` or a `Enum` the argument automatically uses
[`choices`](https://docs.python.org/3/library/argparse.html#choices).
```python
# example10.py
from typing import Literal
import clig
def main(name: str, move: Literal["rock", "paper", "scissors"]):
print(locals())
clig.run(main)
```
```
> python example10.py -h
usage: main [-h] name {rock,paper,scissors}
positional arguments:
name
{rock,paper,scissors}
options:
-h, --help show this help message and exit
```
As is expected in [`argparse`](https://docs.python.org/3/library/argparse.html),
an error message will be displayed if the argument was not one of the acceptable
values:
```
> python example10.py John knife
usage: main [-h] name {rock,paper,scissors}
main: error: argument move: invalid choice: 'knife' (choose from rock, paper, scissors)
```
```
> python example10.py Mary paper
{'name': 'Mary', 'move': 'paper'}
```
#### Passing Enums
On the command line, an `Enum` should be passed by name, regardless of whether
it is a numeric Enum or a string Enum
```python
# example11.py
from enum import Enum, StrEnum
import clig
class Color(Enum):
red = 1
blue = 2
yellow = 3
class Statistic(StrEnum):
minimum = "minimum"
mean = "mean"
maximum = "maximum"
def main(color: Color, statistic: Statistic):
print(locals())
clig.run(main)
```
```
> python example11.py -h
usage: main [-h] {red,blue,yellow} {minimum,mean,maximum}
positional arguments:
{red,blue,yellow}
{minimum,mean,maximum}
options:
-h, --help show this help message and exit
```
It is correctly passed to the function
```
> python example11.py red mean
{'color': <Color.red: 1>, 'statistic': <Statistic.mean: 'mean'>}
```
```
> python example11.py green
usage: main [-h] {red,blue,yellow} {minimum,mean,maximum}
main: error: argument color: invalid choice: 'green' (choose from red, blue, yellow)
```
#### Literal with Enum
You can even mix `Enum` and `Literal`, following the
[`Literal` specification](https://typing.python.org/en/latest/spec/literal.html#legal-parameters-for-literal-at-type-check-time)
```python
# example12.py
from typing import Literal
from enum import Enum
import clig
class Color(Enum):
red = 1
blue = 2
yellow = 3
def main(color: Literal[Color.red, "green", "black"]):
print(locals())
clig.run(main)
```
```
> python example12.py red
{'color': <Color.red: 1>}
```
```
> python example12.py green
{'color': 'green'}
```
### Variadic arguments (`*args` and `**kwargs`): [Partial parsing](https://docs.python.org/3/library/argparse.html#partial-parsing)
When the function has variadic arguments in the form `*args` or `**kwargs`, the
[parse_known_args()](https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.parse_known_args)
method will be used internally to gather unspecified arguments:
```python
>>> import clig
>>> def variadics(foo: str, *args, **kwargs):
... print(locals())
...
>>> clig.run(variadics, "bar badger BAR spam --name adam --title mister".split())
{'foo': 'bar', 'args': ('badger', 'BAR', 'spam'), 'kwargs': {'name': 'adam', 'title': 'mister'}}
```
#### `*args`
For
[arbitrary arguments in the form `*args`](https://docs.python.org/3/tutorial/controlflow.html#arbitrary-argument-lists),
the unspecified arguments will be wrapped up in a tuple of strings, by default.
If there is a type annotation, the conversion is made in the whole tuple:
```python
>>> import clig
>>> def variadicstyped(number: float, *integers: int):
... print(locals())
...
>>> clig.run(variadicstyped, ["36.7", "1", "2", "3", "4", "5"])
{'number': 36.7, 'integers': (1, 2, 3, 4, 5)}
```
#### `**kwargs`
For
[arbitrary keyword arguments in the form `**kwargs`](https://docs.python.org/3/tutorial/controlflow.html#keyword-arguments),
the unspecified arguments will be wrapped up in a dictionary of strings by
default. The keys of the dictionary are the names used with the option delimiter
on the command line (usually `-` or `--`). If there is more than one value for
an option, the values are gathered in a list:
```python
# example13.py
import clig
def foobar(name: str, **kwargs):
print(locals())
clig.run(foobar)
```
```
> python example13.py joseph --nickname joe --uncles jack jean adam
{'name': 'joseph', 'kwargs': {'nickname': 'joe', 'uncles': ['jack', 'jean', 'adam']}}
```
If there is a type annotation, the conversion is applied to all elements of the
dictionary:
```python
# example14.py
import clig
def foobartyped(name: str, **integers: int):
    print(locals())
clig.run(foobartyped)
```
```
> python example14.py joseph --age 23 --numbers 25 27 30
{'name': 'joseph', 'integers': {'age': 23, 'numbers': [25, 27, 30]}}
```
```
> python example14.py joseph --age 23 --numbers jack jean adam
ValueError: invalid literal for int() with base 10: 'jack'
```
#### Error when passing _flagged_ arguments to `*args`
The flag delimiters (usually `-` or `--`,
[which can be changed](https://docs.python.org/3/library/argparse.html#prefix-chars))
are always interpreted as prefixes for keyword arguments, so an appropriate
error is raised when keyword arguments are not allowed:
```python
# example15.py
import clig
def bazham(name: str, *uncles: str):
print(locals())
clig.run(bazham)
```
```
> python example15.py joseph jack john
{'name': 'joseph', 'uncles': ('jack', 'john')}
```
```
> python example15.py joseph --uncles jack john
TypeError: bazham() got an unexpected keyword argument 'uncles'
```
## Argument specification
In some complex cases supported by
[`argparse`](https://docs.python.org/3/library/argparse.html), the arguments may
not be completely inferred by `clig.run()` from the function signature.
In these cases, you can specify the argument parameters directly using the
[`Annotated`](https://docs.python.org/3/library/typing.html#typing.Annotated)
type (or its `clig` alias `Arg`) with its "metadata" created by the
`data()` function.
The `data()` function accepts all possible arguments of the original
[`add_argument()`](https://docs.python.org/3/library/argparse.html#the-add-argument-method)
method:
### name or flags
The
[`name_or_flags`](https://docs.python.org/3/library/argparse.html#name-or-flags)
parameter can be used to define additional flags for the arguments, like `-f` or
`--foo`:
```python
# example16.py
from clig import Arg, data, run
def main(foobar: Arg[str, data("-f", "--foo")] = "baz"):
print(locals())
run(main)
```
```
> python example16.py -h
usage: main [-h] [-f FOOBAR]
options:
-h, --help show this help message and exit
-f FOOBAR, --foo FOOBAR
```
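For readers who know `argparse`, a rough plain-argparse equivalent of the example above (an illustration, not clig's actual implementation) would be:

```python
import argparse

# data("-f", "--foo") with a default maps to an optional argument;
# dest="foobar" keeps the parameter name from the function signature
parser = argparse.ArgumentParser(prog="main")
parser.add_argument("-f", "--foo", dest="foobar", default="baz")

args = parser.parse_args(["-f", "qux"])
print(args.foobar)  # -> qux
```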
[`name or flags`](https://docs.python.org/3/library/argparse.html#name-or-flags)
can also be used to turn a positional argument (without default) into a
[`required`](https://docs.python.org/3/library/argparse.html#required) flagged
argument (a _required option_):
```python
# example17.py
from clig import Arg, data, run
def main(foo: Arg[str, data("-f")]):
print(locals())
run(main)
```
```
> python example17.py -h
usage: main [-h] -f FOO
options:
-h, --help show this help message and exit
-f FOO, --foo FOO
```
```
> python example17.py
usage: main [-h] -f FOO
main: error: the following arguments are required: -f/--foo
```
**Note**:
As you can see above, `clig` tries to create a _long flag_ (`--`) for the
argument when only _short flags_ (`-`) are defined (but not when long flags are
already defined). However,
[this behavior can be disabled](./docs/sphinx/source/notebooks/advancedfeatures.md).
Some options for the
[`name or flags`](https://docs.python.org/3/library/argparse.html#name-or-flags)
parameter can also be set in the `run()` function.
### nargs
Other cases of [`nargs`](https://docs.python.org/3/library/argparse.html#nargs)
can be specified in the `data()` function.
The next example uses an optional argument with
[`nargs="?"`](https://docs.python.org/3/library/argparse.html#nargs) and
[`const`](https://docs.python.org/3/library/argparse.html#const), which yields
three different behaviors for the optional argument:
- value passed
- value not passed (sets default value)
- option passed without value (sets const value):
```python
>>> from clig import Arg, data, run
...
>>> def main(foo: Arg[str, data(nargs="?", const="c")] = "d"):
... print(locals())
...
>>> run(main, ["--foo", "YY"])
{'foo': 'YY'}
>>> run(main, [])
{'foo': 'd'}
>>> run(main, ["--foo"])
{'foo': 'c'}
```
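These three behaviors come straight from argparse's own `nargs="?"` semantics; the equivalent raw-argparse sketch behaves identically:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--foo", nargs="?", const="c", default="d")

print(parser.parse_args(["--foo", "YY"]).foo)  # value passed  -> 'YY'
print(parser.parse_args([]).foo)               # option absent -> default 'd'
print(parser.parse_args(["--foo"]).foo)        # bare option   -> const 'c'
```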
The next example makes a positional (not flagged) argument optional, by using
[`nargs="?"`](https://docs.python.org/3/library/argparse.html#nargs) and
[`default`](https://docs.python.org/3/library/argparse.html#default) (without
an explicit default, it would fall back to `None`):
```python
>>> from clig import Arg, data, run
>>> def main(foo: Arg[str, data(nargs="?", default="d")]):
... print(locals())
...
>>> run(main, ["YY"])
{'foo': 'YY'}
>>> run(main, [])
{'foo': 'd'}
```
### action
Other options for the
[`action`](https://docs.python.org/3/library/argparse.html#action) parameter can
also be used in the `data()` function:
```python
>>> from clig import Arg, data, run
>>> def append(foo: Arg[list[str], data(action="append")] = ["0"]):
... print(locals())
...
>>> def append_const(bar: Arg[list[int], data(action="append_const", const=42)] = [42]):
... print(locals())
...
>>> def extend(baz: Arg[list[float], data(action="extend")] = [0]):
... print(locals())
...
>>> def count(ham: Arg[int, data(action="count")] = 0):
... print(locals())
...
>>> run(append, "--foo 1 --foo 2".split())
{'foo': ['0', '1', '2']}
...
>>> run(append_const, "--bar --bar --bar --bar".split())
{'bar': [42, 42, 42, 42, 42]}
...
>>> run(extend, "--baz 25 --baz 50 65 75".split())
{'baz': [0, 25.0, 50.0, 65.0, 75.0]}
...
>>> run(count, "--ham --ham --ham".split())
{'ham': 3}
```
### metavar
The
[`metavar`](https://docs.python.org/3/library/argparse.html#metavar) parameter
sets an alternative name used to refer to an argument in help messages. By
default, an argument is referred to by its name, if positional, or by its name
uppercased, if optional.
```python
# example18.py
from clig import Arg, data, run
def main(ham: Arg[str, data(metavar="YYY")], foo: Arg[str, data("-f", metavar="<foobar>")]):
print(locals())
run(main)
```
```
> python example18.py -h
usage: main [-h] -f <foobar> YYY
positional arguments:
YYY
options:
-h, --help show this help message and exit
-f <foobar>, --foo <foobar>
```
Some options for the
[`metavar`](https://docs.python.org/3/library/argparse.html#metavar) argument
[can also be set in the `run()` function](./docs/sphinx/source/notebooks/advancedfeatures.md#metavar-modifiers).
### help
It is usually more convenient to specify [argument help texts in the docstring](#helps).
However, you can define help texts using the `data()` function in the same way
as in the original
[`add_argument()`](https://docs.python.org/3/library/argparse.html#the-add-argument-method)
method. Help texts passed in the `data()` function take precedence.
```python
# example19.py
from clig import Arg, data, run
def mycommand(number: Arg[int, data(help="a different help for the number")]):
"""Description of the command
Args:
number: a number to compute
"""
pass
run(mycommand)
```
```
> python example19.py -h
usage: mycommand [-h] number
Description of the command
positional arguments:
number a different help for the number
options:
-h, --help show this help message and exit
```
Some options for the
[`help`](https://docs.python.org/3/library/argparse.html#help) argument
[can also be set in the `run()` function](./docs/sphinx/source/notebooks/advancedfeatures.md#help-modifiers).
## Argument groups
The
[`argparse`](https://docs.python.org/3/library/argparse.html#module-argparse)
module provides
[argument groups](https://docs.python.org/3/library/argparse.html#argument-groups)
and
[mutually exclusive argument groups](https://docs.python.org/3/library/argparse.html#mutual-exclusion).
These features are available in `clig` through two additional classes:
`ArgumentGroup` and `MutuallyExclusiveGroup`.
The objects created with these classes can be passed to the `group` parameter of
the `data()` function.
Each class accepts all the parameters of the original methods
[`add_argument_group()`](https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.add_argument_group)
and
[`add_mutually_exclusive_group()`](https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.add_mutually_exclusive_group).
```python
# example20.py
from clig import Arg, data, run, ArgumentGroup
g = ArgumentGroup(title="Group of arguments", description="This is my group of arguments")
def main(foo: Arg[str, data(group=g)], bar: Arg[int, data(group=g)] = 42):
print(locals())
run(main)
```
```
> python example20.py -h
usage: main [-h] [--bar BAR] foo
options:
-h, --help show this help message and exit
Group of arguments:
This is my group of arguments
foo
--bar BAR
```
Remember that mutually exclusive arguments
[must be optional](https://github.com/python/cpython/blob/7168553c00767689376c8dbf5933a01af87da3a4/Lib/argparse.py#L1805)
(either by using a flag in the `data()` function, or by setting a default value):
```python
# example21.py
from clig import Arg, data, run, MutuallyExclusiveGroup
g = MutuallyExclusiveGroup()
def main(foo: Arg[str, data("-f", group=g)], bar: Arg[int, data(group=g)] = 42):
print(locals())
run(main)
```
```
> python example21.py --foo rocky --bar 23
usage: main [-h] [-f FOO | --bar BAR]
main: error: argument --bar: not allowed with argument -f/--foo
```
### Required mutually exclusive group
The `MutuallyExclusiveGroup` constructor accepts a `required` argument, just
like the original
[`add_mutually_exclusive_group()`](https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.add_mutually_exclusive_group)
method (to indicate that at least one of the mutually exclusive arguments is
required):
```python
# example22.py
from clig import Arg, data, run, MutuallyExclusiveGroup
g = MutuallyExclusiveGroup(required=True)
def main(foo: Arg[str, data(group=g)] = "baz", bar: Arg[int, data(group=g)] = 42):
print(locals())
run(main)
```
```
> python example22.py -h
usage: main [-h] (--foo FOO | --bar BAR)
options:
-h, --help show this help message and exit
--foo FOO
--bar BAR
```
```
> python example22.py
usage: main [-h] (--foo FOO | --bar BAR)
main: error: one of the arguments --foo --bar is required
```
### Mutually exclusive group added to an argument group
The `MutuallyExclusiveGroup` constructor also accepts an additional
`argument_group` parameter, because
[a mutually exclusive group can be added to an argument group](https://github.com/python/cpython/blob/920286d6b296f9971fc79e14ec22966f8f7a7b90/Doc/library/argparse.rst?plain=1#L2028-L2029).
```python
# example23.py
from clig import Arg, data, run, ArgumentGroup, MutuallyExclusiveGroup
ag = ArgumentGroup(title="Group of arguments", description="This is my group")
meg = MutuallyExclusiveGroup(argument_group=ag)
def main(
foo: Arg[str, data(group=meg)] = "baz",
bar: Arg[int, data(group=meg)] = 42,
):
print(locals())
run(main)
```
```
> python example23.py -h
usage: main [-h] [--foo FOO | --bar BAR]
options:
-h, --help show this help message and exit
Group of arguments:
This is my group
--foo FOO
--bar BAR
```
However, you can define just the `MutuallyExclusiveGroup` object by passing the
parameters of `ArgumentGroup` to the constructor of the former class, which
supports them:
```python
# example24.py
from clig import Arg, data, run, MutuallyExclusiveGroup
g = MutuallyExclusiveGroup(
title="Group of arguments",
description="This is my exclusive group of arguments",
)
def main(
foo: Arg[str, data("-f", group=g)],
bar: Arg[int, data("-b", group=g)],
):
print(locals())
run(main)
```
```
> python example24.py -h
usage: main [-h] [-f FOO | -b BAR]
options:
-h, --help show this help message and exit
Group of arguments:
This is my exclusive group of arguments
-f FOO, --foo FOO
-b BAR, --bar BAR
```
### The walrus operator (`:=`)
You can define the argument group entirely within the function declaration, on
a single line, by using the
[walrus operator](https://docs.python.org/3/reference/expressions.html#assignment-expressions)
(`:=`):
```python
# example25.py
from clig import Arg, data, run, MutuallyExclusiveGroup
def main(
foo: Arg[str, data(group=(g := MutuallyExclusiveGroup(title="My group")))] = "baz",
bar: Arg[int, data(group=g)] = 42,
):
print(locals())
run(main)
```
```
> python example25.py -h
usage: main [-h] [--foo FOO | --bar BAR]
options:
-h, --help show this help message and exit
My group:
--foo FOO
--bar BAR
```
## Subcommands
Instead of using the function `clig.run()`, you can create an instance of the
`Command` class, passing your function to its constructor, and call the
`Command.run()` method.
```python
# example26.py
from clig import Command
def main(name:str, age: int, height: float):
print(locals())
cmd = Command(main)
cmd.run()
```
```
> python example26.py "Carmem Miranda" 42 1.85
{'name': 'Carmem Miranda', 'age': 42, 'height': 1.85}
```
This makes it possible to use some methods to add
[subcommands](https://docs.python.org/3/library/argparse.html#sub-commands). All
subcommands will also be instances of the same `Command` class. There are four
main methods available:
- `new_subcommand`: Creates a subcommand and returns the newly created
  `Command` instance.
- `add_subcommand`: Creates the subcommand and returns the caller object. This
  is useful to add multiple subcommands in one single line.
- `end_subcommand`: Creates the subcommand and returns the parent of the caller
  object. If the caller doesn't have a parent, an error is raised. This is
  useful to finish adding subcommands to an object on a single line.
- `subcommand`: Creates the subcommand and returns the input function
  unchanged. This method is meant to be used as a
  [function decorator](https://docs.python.org/3/glossary.html#term-decorator).
There are also [2 module-level functions](#subcommands-using-function-decorators):
`command()` and `subcommand()`. They also return the functions unchanged, and so
may also be used as decorators.
The functions declared as commands execute sequentially, from a `Command` to its
subcommands.
The `Command()` constructor also accepts other arguments to customize the
interface, and the class provides other methods, like `print_help()`, analogous
to the
[original method](https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.print_help).
### Subcommands using methods
The methods `new_subcommand` and `add_subcommand` can be used to add subcommands
in the usual object-oriented style.
Consider the case below, with two levels of subcommands:
```
prog
├─── subfunction1
└─── subfunction2
└─── subsubfunction
```
You can create the main command object and add subcommands to it after:
```python
>>> from clig import Command
>>> def prog(name: str, age: int):
... print(locals())
...
>>> def subfunction1(height: float):
... print(locals())
...
>>> def subfunction2(father: str, mother: str):
... print(locals())
...
>>> def subsubfunction(city: str, state: str):
... print(locals())
...
>>> cmd = Command(prog) # defines the main object
>>> cmd.add_subcommand(subfunction1) # adds a subcommand to the main object
>>> sub = cmd.new_subcommand(subfunction2) # adds and returns a new created subcommand object
>>> sub.add_subcommand(subsubfunction) # adds a subcommand to the subcommand object
...
>>> cmd.print_help() # main command help
usage: prog [-h] name age {subfunction1,subfunction2} ...
positional arguments:
name
age
options:
-h, --help show this help message and exit
subcommands:
{subfunction1,subfunction2}
subfunction1
subfunction2
```
Subcommands are correctly handled with their
[subparsers](https://docs.python.org/3/library/argparse.html#argparse.ArgumentParser.add_subparsers).
```python
>>> sub.print_help() # subcommand help
usage: prog name age subfunction2 [-h] father mother {subsubfunction} ...
positional arguments:
father
mother
options:
-h, --help show this help message and exit
subcommands:
{subsubfunction}
subsubfunction
```
Remember that the command functions execute sequentially, from a `Command` to
its subcommands.
```python
>>> # run the main command with all subcommands
>>> cmd.run("jack 23 subfunction2 michael suzan subsubfunction santos SP".split())
{'name': 'jack', 'age': 23}
{'father': 'michael', 'mother': 'suzan'}
{'city': 'santos', 'state': 'SP'}
...
>>> # run the subcommand with its subcommand
>>> sub.run(["jean", "karen", "subsubfunction", "campos", "RJ"])
{'father': 'jean', 'mother': 'karen'}
{'city': 'campos', 'state': 'RJ'}
```
To access the attributes of a command inside its subcommands' functions, check
out the
[`Context`](./docs/sphinx/source/notebooks/advancedfeatures.md#context) object
feature.
#### All CLI in one statement
Using the three methods `new_subcommand`, `add_subcommand` and `end_subcommand`,
you can define the whole interface in a single statement.
For a clear example, consider the [Git](https://git-scm.com/) CLI. Part of its
command hierarchy is the following:
```
git
├─── status
├─── commit
├─── remote
│ ├─── add
│ ├─── rename
│ └─── remove
└─── submodule
├─── init
└─── update
```
Then, the functions could be declared in the following structure, with the CLI
definition at the end:
```python
# example27.py
from inspect import getframeinfo, currentframe
from pathlib import Path
from clig import Command
def git(exec_path: Path = Path("git"), work_tree: Path = Path("C:/Users")):
"""The git command line interface"""
print(f"{getframeinfo(currentframe()).function} {locals()}")
def status(branch: str):
"""Show the repository status"""
print(f"{getframeinfo(currentframe()).function} {locals()}")
def commit(message: str):
"""Record changes to the repository"""
print(f"{getframeinfo(currentframe()).function} {locals()}")
def remote(verbose: bool = False):
"""Manage remote repositories"""
print(f"{getframeinfo(currentframe()).function} {locals()}")
def add(name: str, url: str):
"""Add a new remote"""
print(f"{getframeinfo(currentframe()).function} {locals()}")
def rename(old: str, new: str):
"""Rename an existing remote"""
print(f"{getframeinfo(currentframe()).function} {locals()}")
def remove(name: str):
"""Remove the remote reference"""
print(f"{getframeinfo(currentframe()).function} {locals()}")
def submodule(quiet: bool):
"""Manages git submodules"""
print(f"{getframeinfo(currentframe()).function} {locals()}")
def init(path: Path = Path(".").resolve()):
"""Initialize the submodules recorded in the index"""
print(f"{getframeinfo(currentframe()).function} {locals()}")
def update(init: bool, path: Path = Path(".").resolve()):
"""Update the registered submodules"""
print(f"{getframeinfo(currentframe()).function} {locals()}")
######################################################################
# The whole interface is built in the code below
# It could also be placed in a separate file that imports the functions
(
Command(git)
.add_subcommand(status)
.add_subcommand(commit)
.new_subcommand(remote)
.add_subcommand(add)
.add_subcommand(rename)
.end_subcommand(remove)
.new_subcommand(submodule)
.add_subcommand(init)
.end_subcommand(update)
.run()
)
```
Help for the main command:
```
> python example27.py -h
usage: git [-h] [--exec-path EXEC_PATH] [--work-tree WORK_TREE]
{status,commit,remote,submodule} ...
The git command line interface
options:
-h, --help show this help message and exit
--exec-path EXEC_PATH
--work-tree WORK_TREE
subcommands:
{status,commit,remote,submodule}
status Show the repository status
commit Record changes to the repository
remote Manage remote repositories
submodule Manages git submodules
```
Help for the `remote` subcommand:
```
> python example27.py remote -h
usage: git remote [-h] [--verbose] {add,rename,remove} ...
Manage remote repositories
options:
-h, --help show this help message and exit
--verbose
subcommands:
{add,rename,remove}
add Add a new remote
rename Rename an existing remote
remove Remove the remote reference
```
Help for the `remote rename` subcommand:
```
> python example27.py remote rename -h
usage: git remote rename [-h] old new
Rename an existing remote
positional arguments:
old
new
options:
-h, --help show this help message and exit
```
Remember: the command functions execute sequentially, from a `Command` to its
subcommands.
```
> python example27.py remote rename oldName newName
git {'exec_path': WindowsPath('git'), 'work_tree': WindowsPath('C:/Users')}
remote {'verbose': False}
rename {'old': 'oldName', 'new': 'newName'}
```
### Subcommands using method decorators
You can define subcommands using the `subcommand()` method as a decorator. To
do so, first create a `Command` instance. The decorator only registers the
functions as commands (it doesn't change their definitions).
```python
# example28.py
from clig import Command
def main(verbose: bool = False):
"""Description for the main command"""
print(f"{locals()}")
cmd = Command(main) # create the command object
@cmd.subcommand
def foo(a, b):
"""Help for foo sub command"""
print(f"{locals()}")
@cmd.subcommand
def bar(c, d):
"""Help for bar sub command"""
print(f"{locals()}")
cmd.run()
```
```
> python example28.py -h
usage: main [-h] [--verbose] {foo,bar} ...
Description for the main command
options:
-h, --help show this help message and exit
--verbose
subcommands:
{foo,bar}
foo Help for foo sub command
bar Help for bar sub command
```
**Note:**
The `cmd` object in the example above could also be created
[without a function](./docs/sphinx/source/notebooks/advancedfeatures.md#calling-cligcommand-without-a-function)
(i.e., `cmd = Command()`).
You could also use the `Command()` constructor as a
[decorator](https://docs.python.org/3/glossary.html#term-decorator). However,
that would redefine the function name as a `Command` instance.
```python
>>> from clig import Command
>>> def main():
... pass
...
>>> cmd = Command(main) # the `main` function is not affected with this
>>> print(type(main))
<class 'function'>
...
>>> @Command
... def main():
... pass
...
>>> print(type(main)) # now the main function is a `Command` instance
<class 'clig.clig.Command'>
```
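The difference comes down to a general property of decorators: a registering decorator that returns its input unchanged leaves the name bound to the original function, while a decorator that returns another object (like `Command`) rebinds the name. A minimal generic sketch of the first pattern (an illustration, not clig's code):

```python
registry = []

def register(func):
    """Record the function in a registry and return it unchanged."""
    registry.append(func)
    return func

@register
def greet():
    return "hi"

# The name still refers to the original, unmodified function
print(type(greet).__name__)            # -> function
print(greet())                         # -> hi
print([f.__name__ for f in registry])  # -> ['greet']
```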
Furthermore, when using decorators without arguments, the functions are not
modified, but you won't be able to define more than one level of subcommands,
[unless you pass an argument to the decorators](./docs/sphinx/source/notebooks/advancedfeatures.md#method-decorator-with-argument).
### Subcommands using function decorators
As noted in the previous example, using decorators without arguments (which do
not modify function definitions) does not allow you to declare more than one
level of subcommands.
For these cases, it is more convenient to use the module-level functions
`clig.command()` and `clig.subcommand()` as decorators, because they don't
require defining a `Command` object:
```python
# example29.py
from clig import command, subcommand, run
@command
def main(verbose: bool = False):
"""Description for the main command"""
print(locals())
@subcommand
def foo(a, b):
"""Help for foo sub command"""
print(locals())
@subcommand
def bar(c, d):
"""Help for bar sub command"""
print(locals())
run()
```
```
> python example29.py -h
usage: main [-h] [--verbose] {foo,bar} ...
Description for the main command
options:
-h, --help show this help message and exit
--verbose
subcommands:
{foo,bar}
foo Help for foo sub command
bar Help for bar sub command
```
However, to define more than one level of subcommands with these function
decorators, you can also
[pass arguments to the functions](./docs/sphinx/source/notebooks/advancedfeatures.md#method-decorator-with-argument),
similarly to
[passing an argument to the method decorators](./docs/sphinx/source/notebooks/advancedfeatures.md#function-decorator-with-argument),
as discussed in the
[Advanced Features](./docs/sphinx/source/notebooks/advancedfeatures.md).
| text/markdown | null | Diogo Rossi <rossi.diogo@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy"
] | [] | null | null | >=3.12 | [] | [] | [] | [] | [] | [] | [] | [
"Documentation, https://github.com/diogo-rossi/clig",
"Issues, https://github.com/diogo-rossi/clig/issues",
"Source, https://github.com/diogo-rossi/clig"
] | uv/0.9.24 {"installer":{"name":"uv","version":"0.9.24","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":null,"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T05:21:55.766414 | clig-0.6.3-py3-none-any.whl | 22,611 | 09/ac/bcfa4683f59777bfbde02422d1a4c4b9d513f4374f99fbe404d22b3d51be/clig-0.6.3-py3-none-any.whl | py3 | bdist_wheel | null | false | ce35b1f8d271d2a7496b3220ceb7ee52 | 08421764a3a037cebcf946be979a8f83bd37aa65118f13d6a6718e885d3a39b7 | 09acbcfa4683f59777bfbde02422d1a4c4b9d513f4374f99fbe404d22b3d51be | MIT | [
"LICENSE.txt"
] | 248 |
2.4 | contex-python | 0.2.2 | Official Python SDK for Contex - Semantic context routing for AI agents | # Contex Python SDK
Official Python client for [Contex](https://github.com/cahoots-org/contex) - Semantic context routing for AI agents.
## Installation
```bash
pip install contex-python
```
## Quick Start
### Async Client (Recommended)
```python
from contex import ContexAsyncClient
async def main():
async with ContexAsyncClient(
url="http://localhost:8001",
api_key="ck_your_api_key_here"
) as client:
# Publish data
await client.publish(
project_id="my-app",
data_key="coding_standards",
data={
"style": "PEP 8",
"max_line_length": 100,
"quotes": "double"
}
)
# Register agent
response = await client.register_agent(
agent_id="code-reviewer",
project_id="my-app",
data_needs=[
"coding standards and style guidelines",
"testing requirements and coverage goals"
]
)
print(f"Matched needs: {response.matched_needs}")
print(f"Notification channel: {response.notification_channel}")
# Query for data
results = await client.query(
project_id="my-app",
query="authentication configuration"
)
for result in results.results:
print(f"{result.data_key}: {result.data}")
import asyncio
asyncio.run(main())
```
### Sync Client
```python
from contex import ContexClient
client = ContexClient(
url="http://localhost:8001",
api_key="ck_your_api_key_here"
)
# Publish data
client.publish(
project_id="my-app",
data_key="config",
data={"env": "prod", "debug": False}
)
# Register agent
response = client.register_agent(
agent_id="my-agent",
project_id="my-app",
data_needs=["configuration", "secrets"]
)
```
## Features
- ✅ **Async & Sync**: Both async and synchronous interfaces
- ✅ **Type Hints**: Full type annotations with Pydantic models
- ✅ **Error Handling**: Comprehensive exception hierarchy
- ✅ **Retry Logic**: Automatic retries with exponential backoff
- ✅ **Rate Limiting**: Built-in rate limit handling
- ✅ **Authentication**: API key authentication support
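The retry behavior listed above follows the standard exponential-backoff pattern. A generic, simplified sketch of that pattern (an illustration, not the SDK's actual implementation):

```python
import time

def retry_with_backoff(func, max_retries=3, base_delay=0.5):
    """Call func(), retrying on any exception with exponentially growing delays."""
    for attempt in range(max_retries + 1):
        try:
            return func()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: propagate the last error
            # delay doubles on each attempt: base, 2*base, 4*base, ...
            time.sleep(base_delay * (2 ** attempt))

# Demo: a call that fails twice before succeeding
attempts = {"count": 0}

def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))  # succeeds on the third try
```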
## API Reference
### Client Initialization
```python
client = ContexAsyncClient(
url="http://localhost:8001", # Contex server URL
api_key="ck_...", # API key for authentication
timeout=30.0, # Request timeout in seconds
max_retries=3, # Maximum number of retries
)
```
### Publishing Data
```python
await client.publish(
project_id="my-app", # Project identifier
data_key="unique-key", # Unique key for this data
data={"any": "json"}, # Data payload
data_format="json", # Format: json, yaml, toml, text
metadata={"tags": ["prod"]}, # Optional metadata
)
```
### Registering Agents
```python
response = await client.register_agent(
agent_id="agent-1", # Unique agent ID
project_id="my-app", # Project ID
data_needs=["config", "secrets"], # Data needs (natural language)
notification_method="redis", # redis or webhook
webhook_url="https://...", # Optional webhook URL
webhook_secret="secret", # Optional webhook secret
last_seen_sequence="0", # Last seen sequence
)
```
### Querying Data
```python
results = await client.query(
project_id="my-app",
query="authentication settings",
max_results=10,
)
for result in results.results:
print(f"{result.data_key}: {result.similarity_score}")
```
### API Key Management
```python
# Create API key
key_response = await client.create_api_key(name="production-key")
print(f"API Key: {key_response.key}") # Store this securely!
# List keys
keys = await client.list_api_keys()
# Revoke key
await client.revoke_api_key(key_id="key-123")
```
### Health Checks
```python
# Comprehensive health
health = await client.health()
# Readiness check
ready = await client.ready()
# Rate limit status
rate_limit = await client.rate_limit_status()
print(f"Remaining: {rate_limit.remaining}/{rate_limit.limit}")
```
## Exception Handling
```python
from contex import (
ContexError,
AuthenticationError,
RateLimitError,
ValidationError,
NotFoundError,
ServerError,
)
try:
await client.publish(...)
except AuthenticationError:
print("Invalid API key")
except RateLimitError as e:
print(f"Rate limited. Retry after {e.retry_after} seconds")
except ValidationError as e:
print(f"Validation error: {e}")
except NotFoundError:
print("Resource not found")
except ServerError:
print("Server error")
except ContexError as e:
print(f"Contex error: {e}")
```
## Development
### Setup
```bash
cd sdk/python
pip install -e ".[dev]"
```
### Running Tests
```bash
pytest
```
### Code Formatting
```bash
black contex/
ruff check contex/
mypy contex/
```
## Examples
See the [examples](examples/) directory for more usage examples:
- `basic_usage.py` - Basic publish and query
- `agent_registration.py` - Agent registration and updates
- `webhook_agent.py` - Webhook-based agent
- `error_handling.py` - Error handling patterns
- `batch_operations.py` - Batch publishing
## License
MIT License - see [LICENSE](LICENSE) for details.
## Links
- [Documentation](https://contex.readthedocs.io)
- [GitHub](https://github.com/cahoots-org/contex)
- [PyPI](https://pypi.org/project/contex-python/)
- [Issues](https://github.com/cahoots-org/contex/issues)
| text/markdown | null | Cahoots <admin@cahoots.cc> | null | null | MIT | ai, agents, context, semantic-search, machine-learning | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Develop... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.25.0",
"pydantic>=2.0.0",
"pytest>=7.4.0; extra == \"dev\"",
"pytest-asyncio>=0.21.0; extra == \"dev\"",
"pytest-mock>=3.12.0; extra == \"dev\"",
"black>=23.0.0; extra == \"dev\"",
"mypy>=1.5.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/cahoots-org/contex",
"Repository, https://github.com/cahoots-org/contex",
"Issues, https://github.com/cahoots-org/contex/issues"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T05:20:54.542034 | contex_python-0.2.2.tar.gz | 19,259 | 1b/9a/d6a3306012a1d0c49917e5b5d72a4c0a2292013fe946355a8be31fe128cd/contex_python-0.2.2.tar.gz | source | sdist | null | false | 67c37a326a172f308a3bbe6400f6941b | ebd3747e06abca4a48b76a1e19e9b7feae74976883d9b816f54ec69b397dce65 | 1b9ad6a3306012a1d0c49917e5b5d72a4c0a2292013fe946355a8be31fe128cd | null | [
"LICENSE"
] | 231 |
2.4 | quotes-convert | 1.1.0 | Convert matching double-quotes to single-quotes or vice versa in strings and streams. Inspired by the popular to-single-quotes npm package | # quotes-convert
[](https://github.com/ysskrishna/quotes-convert/blob/main/LICENSE)

[](https://www.python.org/downloads/)
[](https://pypi.org/project/quotes-convert/)
[](https://pepy.tech/projects/quotes-convert)
[](https://ysskrishna.github.io/quotes-convert/)
[](https://ysskrishna.github.io/quotes-convert/playground/)
Convert matching double-quotes to single-quotes or vice versa in strings and streams. Inspired by the popular [to-single-quotes](https://github.com/sindresorhus/to-single-quotes) npm package.
> 🚀 **Try it interactively in your browser!** Test the library with our [Interactive Playground](https://ysskrishna.github.io/quotes-convert/playground/) - no installation required.
## Features
- **Multiple input types**: Convert quotes in strings and streams
- **Proper escaping**: Automatically handles quote escaping and unescaping
- **Memory efficient**: Process large texts with streaming without loading everything into memory
- **Zero dependencies**: Lightweight with no external dependencies
- **Type safe**: Full type hints for excellent IDE support
## Why use this library?
Why not just use `.replace('"', "'")`? Because simply replacing quotes breaks strings that contain escaped quotes.
```python
# The problem with simple replace
original = 'He said "Don\'t do it"'  # the text: He said "Don't do it"
broken = original.replace('"', "'")
# Result: He said 'Don't do it' -> the unescaped apostrophe now terminates the string!

# The solution: quotes-convert handles escaping while converting
from quotes_convert import single_quotes
fixed = single_quotes(original)
# Result: He said 'Don\'t do it' -> the inner apostrophe is escaped, meaning preserved
```
## Installation
```bash
pip install quotes-convert
```
## Usage Examples
### Basic Usage
```python
from quotes_convert import single_quotes, double_quotes
result = single_quotes('x = "hello"; y = "world"')
print(result) # x = 'hello'; y = 'world'
result = double_quotes("x = 'hello'; y = 'world'")
print(result) # x = "hello"; y = "world"
```
### Handling Mixed Quotes
```python
from quotes_convert import single_quotes, double_quotes
# Automatically escapes inner quotes
result = single_quotes('"it\'s working"')
print(result) # 'it\'s working'
result = double_quotes("'say \"hi\"'")
print(result) # "say \"hi\""
```
### Processing JSON-like Strings
Useful for normalizing JSON strings or Python dict definitions.
```python
from quotes_convert import double_quotes
json_str = "{'key': 'value', 'nested': {'inner': 'data'}}"
result = double_quotes(json_str) # {"key": "value", "nested": {"inner": "data"}}
```
### Shell Script Processing
```python
from quotes_convert import single_quotes
script = 'echo "Hello $USER"; grep "pattern" file.txt'
result = single_quotes(script) # echo 'Hello $USER'; grep 'pattern' file.txt
```
## Streaming Large Texts
Process large files or streams efficiently without loading the entire content into memory.
```python
from quotes_convert import single_quotes_stream
def line_generator():
yield 'line 1: "hello"\n'
yield 'line 2: "world"\n'
# Process the stream chunk by chunk
for chunk in single_quotes_stream(line_generator()):
print(chunk, end='')
# Output:
# line 1: 'hello'
# line 2: 'world'
```
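Because the stream functions accept any `Iterable[str]`, an open file handle or a custom chunking generator also works as input. Below is a sketch of such a generator — `file_chunks` is a hypothetical helper for illustration, not part of the package:

```python
import os
import tempfile

def file_chunks(path, size=8192):
    """Yield a text file in fixed-size chunks without loading it whole."""
    with open(path, encoding="utf-8") as f:
        while chunk := f.read(size):
            yield chunk

# Demo with a small temporary file; in real use you would pass
# file_chunks(path) to single_quotes_stream(...) instead of joining.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False, encoding="utf-8") as tmp:
    tmp.write('a = "x"\nb = "y"\n')
print("".join(file_chunks(tmp.name, size=4)), end="")
os.remove(tmp.name)
```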
## API Reference
| Function | Description |
|----------|-------------|
| `single_quotes(text: str) -> str` | Convert matching double-quotes to single-quotes. |
| `double_quotes(text: str) -> str` | Convert matching single-quotes to double-quotes. |
| `single_quotes_stream(stream: Iterable[str]) -> Generator[str, None, None]` | Convert matching double-quotes to single-quotes in a stream, yielding chunks. |
| `double_quotes_stream(stream: Iterable[str]) -> Generator[str, None, None]` | Convert matching single-quotes to double-quotes in a stream, yielding chunks. |
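For intuition only, here is a minimal, simplified sketch of the matching-and-escaping idea behind `single_quotes` — this is illustrative pure Python, not the package's actual implementation, and it ignores edge cases such as escaped backslashes:

```python
def naive_single_quotes(text: str) -> str:
    """Simplified sketch: convert matching double quotes to single quotes,
    escaping inner apostrophes and unescaping inner double quotes.
    Not the real quotes-convert implementation."""
    out = []
    in_double = False
    i = 0
    while i < len(text):
        ch = text[i]
        if ch == "\\" and i + 1 < len(text):
            nxt = text[i + 1]
            # \" inside a converted region no longer needs its backslash
            out.append(nxt if (in_double and nxt == '"') else ch + nxt)
            i += 2
            continue
        if ch == '"':
            out.append("'")            # swap the delimiter
            in_double = not in_double
        elif ch == "'" and in_double:
            out.append("\\'")          # apostrophe must now be escaped
        else:
            out.append(ch)
        i += 1
    return "".join(out)

print(naive_single_quotes('x = "hello"; y = "world"'))  # x = 'hello'; y = 'world'
```

The real library additionally handles chunk boundaries for streams and the edge cases this sketch skips.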
## Acknowledgments
Inspired by [Sindre Sorhus](https://github.com/sindresorhus)'s [to-single-quotes](https://github.com/sindresorhus/to-single-quotes) npm package.
## Changelog
See [CHANGELOG.md](https://github.com/ysskrishna/quotes-convert/blob/main/CHANGELOG.md) for a detailed list of changes and version history.
## Contributing
Contributions are welcome! Please read our [Contributing Guide](https://github.com/ysskrishna/quotes-convert/blob/main/CONTRIBUTING.md) for details on our code of conduct, development setup, and the process for submitting pull requests.
## Support
If you find this library helpful:
- ⭐ Star the repository
- 🐛 Report issues
- 🔀 Submit pull requests
- 💝 [Sponsor on GitHub](https://github.com/sponsors/ysskrishna)
## License
MIT © [Y. Siva Sai Krishna](https://github.com/ysskrishna) - see [LICENSE](https://github.com/ysskrishna/quotes-convert/blob/main/LICENSE) file for details.
---
<p align="left">
<a href="https://github.com/ysskrishna">Author's GitHub</a> •
<a href="https://linkedin.com/in/ysskrishna">Author's LinkedIn</a> •
<a href="https://github.com/ysskrishna/quotes-convert/issues">Report Issues</a> •
<a href="https://pypi.org/project/quotes-convert/">Package on PyPI</a> •
<a href="https://ysskrishna.github.io/quotes-convert/">Package Documentation</a> •
<a href="https://ysskrishna.github.io/quotes-convert/playground/">Package Playground</a>
</p>
| text/markdown | null | "Y. Siva Sai Krishna" <sivasaikrishnassk@gmail.com> | null | null | MIT | convert, developer-tools, double-quotes, escaping, json-formatting, quotes, single-quotes, streaming, string-manipulation, strings, text-processing, type-safe, utilities, zero-dependency | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python ... | [] | null | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/ysskrishna/quotes-convert",
"Repository, https://github.com/ysskrishna/quotes-convert.git",
"Issues, https://github.com/ysskrishna/quotes-convert/issues",
"Changelog, https://github.com/ysskrishna/quotes-convert/blob/main/CHANGELOG.md"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T05:20:45.888654 | quotes_convert-1.1.0-py3-none-any.whl | 6,650 | d5/20/b1599e5b1ca9b8372c906b3e9ca1615bb9a141bc93c9f17d3495ad4f9c8b/quotes_convert-1.1.0-py3-none-any.whl | py3 | bdist_wheel | null | false | 4fe745c7f8caa9f3a8b38a3bbf74310c | 191652631f8d3e2e59d4348cdd2558a9b3dd141d2a67af490223fbdfad853d39 | d520b1599e5b1ca9b8372c906b3e9ca1615bb9a141bc93c9f17d3495ad4f9c8b | null | [
"LICENSE"
] | 244 |
2.4 | jupyterlab-codex-sidebar | 0.1.4 | JupyterLab Codex sidebar with server extension | # JupyterLab Codex Sidebar
## English
### Quick Guide (Discovery Install, Read First)
Follow this order for the easiest setup.
1) Prerequisites
- Install the `Jupytext` JupyterLab extension first.
- Ensure `codex` CLI is installed and logged in.
- Check in terminal: `codex exec --help` (or `codex exec`) works.
2) Install from JupyterLab Discovery (Extension Manager)
1. Open JupyterLab.
2. Open `Extensions` (puzzle icon) in the left sidebar.
3. Search `jupyterlab-codex-sidebar`.
4. Click `Install`.
3) Restart and verify
1. In JupyterLab menu, run `File > Shut Down`.
2. Start JupyterLab again.
3. Confirm `Codex` appears in the right sidebar.
4) Create Jupytext pairs (`.ipynb` <-> `.py`)
- This extension requires a Jupytext paired workflow.
### A. Start from `.ipynb`
1. Open the notebook.
2. In the `Jupytext` menu, choose `Pair Notebook with ...`.
3. Select `.py`.
4. Confirm a same-name `.py` file exists.
### B. Start from `.py` (reverse direction)
1. Open the `.py` file as a notebook (`Open With > Notebook`, etc.).
2. In the `Jupytext` menu, choose `Pair Notebook with ...`.
3. Select `.ipynb`.
4. Confirm a same-name `.ipynb` file exists.
Note: For Codex sidebar usage, the paired `.ipynb` and `.py` should both exist with the same base name.
A JupyterLab 4 sidebar extension that connects to Codex CLI (`codex exec --json`) and provides a chat-style assistant UI.
The extension has two parts:
- Frontend: JupyterLab prebuilt extension (React)
- Backend: Jupyter Server extension (WebSocket at `/codex/ws`)
The backend runs `codex` as a local subprocess per request, streams JSONL events, and renders them in the UI.
## Features
- Threaded sessions by notebook path
- Model / Reasoning Effort / Sandbox selection in the UI
- Optional inclusion of active cell text
- Designed for a Jupytext paired workflow (`.ipynb` <-> `.py`)
- execution is disabled if the paired `.py` file is missing
- Conversation/session logs: `~/.jupyter/codex-sessions/`
- Optional usage snapshot: best-effort scan of recent `~/.codex/sessions/`
## Requirements
- Python 3.9+
- JupyterLab 4 and Jupyter Server
- Codex CLI installed and authenticated (`codex exec` works in terminal)
- Node.js + `jlpm` + `jupyter labextension` for source build
## Install / Run
### Quick start (recommended for local development)
There are two development workflows:
- `install_dev.sh` : install/link only (does not start JupyterLab)
- `run_dev.sh` : install first, then start JupyterLab
Install only:
```bash
bash install_dev.sh
```
Install + run:
```bash
bash run_dev.sh --ServerApp.port=8888
```
`run_dev.sh` internally runs `install_dev.sh` first.
### Manual local install
1. Build frontend
```bash
jlpm install
jlpm build
```
2. Install Python package
```bash
python -m pip install -e .
```
3. Enable server extension
```bash
PREFIX="${CONDA_PREFIX:-$(python -c 'import sys; print(sys.prefix)')}"
mkdir -p "$PREFIX/etc/jupyter/jupyter_server_config.d"
cp jupyter-config/jupyter_server_config.d/jupyterlab_codex.json \
"$PREFIX/etc/jupyter/jupyter_server_config.d/jupyterlab_codex.json"
jupyter server extension enable jupyterlab_codex --sys-prefix || true
jupyter server extension list | sed -n '1,120p' || true
```
4. Link labextension in editable mode
```bash
PREFIX="${CONDA_PREFIX:-$(python -c 'import sys; print(sys.prefix)')}"
mkdir -p "$PREFIX/share/jupyter/labextensions"
ln -sfn "$(pwd)/jupyterlab_codex/labextension" "$PREFIX/share/jupyter/labextensions/jupyterlab-codex-sidebar"
jupyter labextension list
```
5. Start JupyterLab
```bash
jupyter lab
```
## Usage
1. Open a notebook in JupyterLab.
2. The `Codex` panel appears in the right sidebar.
3. Send messages and the server runs `codex exec --json ...` and streams output.
4. Settings controls:
- Auto-save before send
- Include active cell
- Include active cell output
- Model / Reasoning Effort / Permission
## Configuration
Server-side defaults can also be set via environment variables:
- `JUPYTERLAB_CODEX_MODEL`: default model when unset in UI/command
- `JUPYTERLAB_CODEX_SANDBOX`: default sandbox (default: `workspace-write`)
- `JUPYTERLAB_CODEX_SESSION_LOGGING`: `0`/`1` to disable/enable local session logging (default: `1`)
- `JUPYTERLAB_CODEX_SESSION_RETENTION_DAYS`: retention period for local session logs in days (default: `30`; set `0` to disable pruning)
- `JUPYTERLAB_CODEX_SESSION_MAX_MESSAGE_CHARS`: max length per stored message, used for local logs (default: `12000`)
Notes:
- Session logs are stored under `~/.jupyter/codex-sessions/` as JSONL+meta JSON.
- Before writing each message, obvious secret-like values are redacted (e.g., API keys, bearer tokens, JWT-like strings).
- You can disable logs entirely by setting `JUPYTERLAB_CODEX_SESSION_LOGGING=0`.
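For example, you can set the session-log defaults in the shell that will launch JupyterLab (the values below are illustrative, not recommendations):

```bash
# Illustrative values; set these in the shell that will launch JupyterLab.
export JUPYTERLAB_CODEX_SANDBOX="workspace-write"
export JUPYTERLAB_CODEX_SESSION_RETENTION_DAYS=7
export JUPYTERLAB_CODEX_SESSION_LOGGING=1
# then start: jupyter lab
```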
Selected UI values are passed as CLI args.
## Paths
- WebSocket endpoint: `/codex/ws`
- Session logs: `~/.jupyter/codex-sessions/*.jsonl` and `*.meta.json`
- Usage snapshot: best-effort scan of recent `~/.codex/sessions/**/*.jsonl`
## Troubleshooting
- Sidebar missing:
- Check `jupyter labextension list` includes `jupyterlab-codex-sidebar`
- In editable install, confirm symlink is created
- WebSocket stays `disconnected`:
- Check `jupyter server extension list` shows `jupyterlab_codex` enabled
- Inspect server logs for errors
- `codex` command not found:
- Verify `codex exec --help` works in terminal
- Recheck PATH/virtualenv and restart JupyterLab
## Development notes
- `jlpm watch` enables auto rebuild
- Main files:
- UI: `src/panel.tsx`
- Server: `jupyterlab_codex/handlers.py`, `jupyterlab_codex/runner.py`
## Architecture
```
[UI (JupyterLab Sidebar)]
|
| WebSocket: /codex/ws
v
[CodexWSHandler (Jupyter Server)]
|
v
[CodexRunner]
- subprocess: codex exec --json --color never --skip-git-repo-check ...
|
v
[UI rendering]
```
## Korean
# JupyterLab Codex Sidebar
### Quick Guide (Discovery Install, Read First)
Follow this order for the easiest setup.
1) Prerequisites
- The `Jupytext` JupyterLab extension must be installed first.
- The `codex` CLI must be installed and logged in from the terminal.
- Check: `codex exec --help` (or `codex exec`) works correctly
2) Install from JupyterLab Discovery (Extension Manager)
1. Launch JupyterLab
2. Open `Extensions` (puzzle icon) in the left sidebar
3. Search for `jupyterlab-codex-sidebar`
4. Click `Install`
3) Verify after restart
1. Run `File > Shut Down` from the JupyterLab menu
2. Start JupyterLab again
3. Confirm the `Codex` panel appears in the right sidebar
4) Create a Jupytext pair (`.ipynb` <-> `.py`)
- This extension assumes a Jupytext paired workflow.
### A. Starting from an `.ipynb` file
1. Open the notebook (`.ipynb`)
2. Choose `Pair Notebook with ...` from the `Jupytext` menu
3. Pair with the `.py` format
4. Confirm a `.py` file with the same name was created
### B. Starting from a `.py` file (reverse direction)
1. Open the `.py` file as a notebook (`Open With > Notebook`, etc.)
2. Choose `Pair Notebook with ...` from the `Jupytext` menu
3. Pair with the `.ipynb` format
4. Confirm an `.ipynb` file with the same name was created
Note: to use the Codex sidebar, both the `.ipynb` and `.py` of the pair must exist with the same base name.
An extension that lets you use the Codex CLI (`codex exec --json`) as a chat UI in the JupyterLab 4 right sidebar.
It consists of two parts.
- Frontend: JupyterLab prebuilt extension (React)
- Backend: Jupyter Server extension (WebSocket: `/codex/ws`)
For each request, the backend invokes the local `codex` executable as a subprocess (streaming JSONL events), and the UI renders the events like a chat.
## Features
- Threads (sessions) separated by notebook path
- Model / Reasoning Effort / sandbox permission selectable in the UI
- Choose whether to include the active cell text in the prompt
- Assumes an `.ipynb` ↔ `.py` (Jupytext paired) workflow (execution is disabled if the paired `.py` is missing)
- Session logs stored in: `~/.jupyter/codex-sessions/`
- (When available) shows a Codex usage snapshot: best-effort scan of `~/.codex/sessions/`
## Requirements
- Python 3.9+
- JupyterLab 4 / Jupyter Server
- Codex CLI installed and authenticated (`codex exec` must work in the terminal)
- (When building from source) Node.js + `jlpm` + the `jupyter labextension` command
## Install / Run
### Quick start (recommended)
The development scripts are split in two.
- `install_dev.sh` : install/link only (does not run `jupyter lab`)
- `run_dev.sh` : install, then launch JupyterLab
Install only:
```bash
bash install_dev.sh
```
Install + run:
```bash
bash run_dev.sh --ServerApp.port=8888
```
`run_dev.sh` internally runs `install_dev.sh` first and then starts JupyterLab.
What the scripts do (summary):
- Install JS dependencies (`jlpm install`) and build (`jlpm build`)
- Install the Python package in editable mode (`python -m pip install -e .`)
- Install and enable the config snippet that activates the server extension
- Symlink the labextension into the current Python environment's `share/jupyter/labextensions/`
- Run `jupyter lab`
To pass additional JupyterLab options, append them after the script.
```bash
bash run_dev.sh --ServerApp.port=8888
```
### Manual install (development/local)
1) Build the frontend
```bash
jlpm install
jlpm build
```
2) Install the Python package
```bash
python -m pip install -e .
```
3) Enable the server extension (first time only)
```bash
PREFIX="${CONDA_PREFIX:-$(python -c 'import sys; print(sys.prefix)')}"
mkdir -p "$PREFIX/etc/jupyter/jupyter_server_config.d"
cp jupyter-config/jupyter_server_config.d/jupyterlab_codex.json \
"$PREFIX/etc/jupyter/jupyter_server_config.d/jupyterlab_codex.json"
# if needed (or to verify)
jupyter server extension enable jupyterlab_codex --sys-prefix || true
jupyter server extension list | sed -n '1,120p' || true
```
4) Link the labextension (required for editable installs)
```bash
PREFIX="${CONDA_PREFIX:-$(python -c 'import sys; print(sys.prefix)')}"
mkdir -p "$PREFIX/share/jupyter/labextensions"
ln -sfn "$(pwd)/jupyterlab_codex/labextension" "$PREFIX/share/jupyter/labextensions/jupyterlab-codex-sidebar"
jupyter labextension list
```
5) Start JupyterLab
```bash
jupyter lab
```
## Usage
1) Start JupyterLab and open a notebook; the `Codex` panel appears in the right sidebar.
2) Type and send a message; the server runs `codex exec --json ...` and streams the results.
3) The following options can be adjusted in Settings.
- Auto-save before send: automatically save the notebook before sending
- Include active cell: include the active cell text in the prompt
- Include active cell output: include the active cell's output (text-oriented) in the prompt
- Model / Reasoning Effort / Permission (sandbox)
## Configuration (optional)
Server-side defaults can also be set via environment variables.
- `JUPYTERLAB_CODEX_MODEL`: used as the default model when none is specified
- `JUPYTERLAB_CODEX_SANDBOX`: default sandbox (default: `workspace-write`)
- `JUPYTERLAB_CODEX_SESSION_LOGGING`: `0`/`1` to disable/enable session logging (default: `1`)
- `JUPYTERLAB_CODEX_SESSION_RETENTION_DAYS`: retention period for local session logs in days (default: `30`; `0` disables pruning)
- `JUPYTERLAB_CODEX_SESSION_MAX_MESSAGE_CHARS`: maximum length of stored messages (default: `12000`)
Notes:
- Session logs are stored in `~/.jupyter/codex-sessions/*.jsonl` and `*.meta.json`.
- Before a message is logged, values that look sensitive (tokens/keys/passwords) are masked.
- If you don't need logs, turn them off with `JUPYTERLAB_CODEX_SESSION_LOGGING=0`.
Note: when a model/permission is explicitly selected in the UI, that value is included in the request and passed as CLI arguments.
## Data / Paths
- WebSocket endpoint: `/codex/ws`
- Session logs: `~/.jupyter/codex-sessions/*.jsonl` and `*.meta.json`
- Usage snapshot (best-effort): scans some of the recent logs under `~/.codex/sessions/**/*.jsonl`
## Troubleshooting
- Sidebar not visible:
- Check that `jupyter labextension list` includes `jupyterlab-codex-sidebar`
- For editable installs, check whether the "link labextension" step was skipped
- WebSocket shows only `disconnected`:
- Check that `jupyter server extension list` shows `jupyterlab_codex` as enabled
- Check the server logs for errors
- `codex` executable not found:
- Verify `codex exec --help` works in the terminal
- Clean up PATH/virtualenv and restart the JupyterLab server
## Development notes
- `jlpm watch`: automatic frontend rebuild/refresh
- Main code locations:
- UI: `src/panel.tsx`
- Server: `jupyterlab_codex/handlers.py`, `jupyterlab_codex/runner.py`
## Architecture (summary flow)
```
[UI (JupyterLab Sidebar)]
|
| WebSocket: /codex/ws
v
[CodexWSHandler (Jupyter Server)]
|
v
[CodexRunner]
- subprocess: codex exec --json --color never --skip-git-repo-check ...
|
v
[UI rendering]
```
| text/markdown | ILHO AHN | null | null | null | BSD-3-Clause | null | [
"Framework :: Jupyter",
"Framework :: Jupyter :: JupyterLab",
"Framework :: Jupyter :: JupyterLab :: Extensions",
"Framework :: Jupyter :: JupyterLab :: Extensions :: Prebuilt",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"jupyterlab<5,>=4",
"jupyter_server<3,>=2"
] | [] | [] | [] | [
"Homepage, https://github.com/oy-ilho/jupyterlab-codex",
"Repository, https://github.com/oy-ilho/jupyterlab-codex",
"Issues, https://github.com/oy-ilho/jupyterlab-codex/issues"
] | twine/6.2.0 CPython/3.13.9 | 2026-02-21T05:20:25.082881 | jupyterlab_codex_sidebar-0.1.4.tar.gz | 1,093,931 | 65/9b/f7cfe27ddc59875ddad71f6a4993162c9785bd5ec8e146b654420d271fd0/jupyterlab_codex_sidebar-0.1.4.tar.gz | source | sdist | null | false | b522fbf212d6e39a63a937c11414da7f | 173bac0422bb38e2d05f93373394e746e0ea113daa916c7a46c58eac0f6a753d | 659bf7cfe27ddc59875ddad71f6a4993162c9785bd5ec8e146b654420d271fd0 | null | [] | 241 |
2.4 | pytest-homeassistant-custom-component | 0.13.316 | Experimental package to automatically extract test plugins for Home Assistant custom components | # pytest-homeassistant-custom-component

Package to automatically extract testing plugins from Home Assistant for custom component testing.
The goal is to provide the same functionality as the tests in home-assistant/core.
pytest-homeassistant-custom-component is updated daily according to the latest homeassistant release including beta.
## Usage:
* All pytest fixtures can be used as normal, like `hass`
* For helpers:
* home-assistant/core native test: `from tests.common import MockConfigEntry`
* custom component test: `from pytest_homeassistant_custom_component.common import MockConfigEntry`
* If your integration is inside a `custom_components` folder, a `custom_components/__init__.py` file or changes to `sys.path` may be required.
* `enable_custom_integrations` fixture is required (versions >=2021.6.0b0)
* Some fixtures, e.g. `recorder_mock`, need to be initialized before `enable_custom_integrations`. See https://github.com/MatthewFlamm/pytest-homeassistant-custom-component/issues/132.
* pytest-asyncio might now require `asyncio_mode = auto` config, see #129.
* If using `load_fixture`, the files need to be in a `fixtures` folder colocated with the tests. For example, a test in `test_sensor.py` can load data from `some_data.json` using `load_fixture` from this structure:
```
tests/
fixtures/
some_data.json
test_sensor.py
```
* When using syrupy snapshots, add a `snapshot` fixture to `conftest.py` to make sure snapshots are loaded from a snapshot folder colocated with the tests.
```py
import pytest

from pytest_homeassistant_custom_component.syrupy import HomeAssistantSnapshotExtension
from syrupy.assertion import SnapshotAssertion
@pytest.fixture
def snapshot(snapshot: SnapshotAssertion) -> SnapshotAssertion:
"""Return snapshot assertion fixture with the Home Assistant extension."""
return snapshot.use_extension(HomeAssistantSnapshotExtension)
```
## Examples:
* See [list of custom components](https://github.com/MatthewFlamm/pytest-homeassistant-custom-component/network/dependents) as examples that use this package.
* Also see tests for `simple_integration` in this repository.
* Use [cookiecutter-homeassistant-custom-component](https://github.com/oncleben31/cookiecutter-homeassistant-custom-component) to create a custom component with tests by using [cookiecutter](https://github.com/cookiecutter/cookiecutter).
* The [github-custom-component-tutorial](https://github.com/boralyl/github-custom-component-tutorial) explains in detail how to create a custom component with a test suite using this package.
## More Info
This repository is set up to be nearly fully automatic.
* Version of home-assistant/core is given in `ha_version`, `pytest_homeassistant_custom_component.const`, and in the README above.
* This package is generated against published releases of homeassistant and updated daily.
* PRs should not include changes to the `pytest_homeassistant_custom_component` files. CI testing will automatically generate the new files.
### Version Strategy
* When changes in extraction are required, there will be a change in the minor version.
* A change in the patch version indicates that it was an automatic update with a homeassistant version.
* This enables tracking back to which versions of pytest-homeassistant-custom-component can be used for
extracting testing utilities from which version of homeassistant.
This package was inspired by [pytest-homeassistant](https://github.com/boralyl/pytest-homeassistant) by @boralyl, but is intended to more closely and automatically track the home-assistant/core library.
| text/markdown | Matthew Flamm | matthewflamm0@gmail.com | null | null | MIT license | null | [
"Development Status :: 3 - Alpha",
"Framework :: Pytest",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Testing"
] | [] | https://github.com/MatthewFlamm/pytest-homeassistant-custom-component | null | >=3.13 | [] | [] | [] | [
"sqlalchemy",
"coverage==7.10.6",
"freezegun==1.5.2",
"license-expression==30.4.3",
"mock-open==1.4.0",
"pydantic==2.12.2",
"pylint-per-file-ignores==1.4.0",
"pipdeptree==2.26.1",
"pytest-asyncio==1.3.0",
"pytest-aiohttp==1.1.0",
"pytest-cov==7.0.0",
"pytest-freezer==0.4.9",
"pytest-github-a... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:20:15.092476 | pytest_homeassistant_custom_component-0.13.316.tar.gz | 65,294 | 59/cc/294b96ae5b90b276a9e0f4cfa02045d1d72fb484fe331ef33c0e4fc3b51a/pytest_homeassistant_custom_component-0.13.316.tar.gz | source | sdist | null | false | 90b36443556e22888ab6987e207f699a | 4457dc5ecb6bfdf39241e008307f2cea34a0036178cbee2e52de3e4290448a16 | 59cc294b96ae5b90b276a9e0f4cfa02045d1d72fb484fe331ef33c0e4fc3b51a | null | [
"LICENSE",
"LICENSE_HA_CORE.md"
] | 1,083 |
2.4 | typespecs | 2.0.0 | Data specifications by type hints | # typespecs
[](https://pypi.org/project/typespecs/)
[](https://pypi.org/project/typespecs/)
[](https://pepy.tech/project/typespecs)
[](https://doi.org/10.5281/zenodo.17681195)
[](https://github.com/astropenguin/typespecs/actions)
Data specifications by type hints
## Installation
```bash
pip install typespecs
```
## Basic Usage
```python
from dataclasses import dataclass
from typespecs import ITSELF, Spec, from_annotated
from typing import Annotated as Ann
@dataclass
class Weather:
temp: Ann[list[float], Spec(category="data", name="Temperature", units="K")]
wind: Ann[list[float], Spec(category="data", name="Wind speed", units="m/s")]
loc: Ann[str, Spec(category="metadata", name="Observed location")]
weather = Weather([273.15, 280.15], [5.0, 10.0], "Tokyo")
specs = from_annotated(weather)
print(specs)
```
```
category data name type units
temp data [273.15, 280.15] Temperature list[float] K
wind data [5.0, 10.0] Wind speed list[float] m/s
loc metadata Tokyo Observed location <class 'str'> <NA>
```
## Advanced Usage
### Handling Sub-annotations
```python
Float = Ann[float, Spec(dtype=ITSELF)]
@dataclass
class Weather:
temp: Ann[list[Float], Spec(category="data", name="Temperature", units="K")]
wind: Ann[list[Float], Spec(category="data", name="Wind speed", units="m/s")]
loc: Ann[str, Spec(category="metadata", name="Observed location")]
weather = Weather([273.15, 280.15], [5.0, 10.0], "Tokyo")
specs = from_annotated(weather)
print(specs)
```
```
category data dtype name type units
temp data [273.15, 280.15] <class 'float'> Temperature list[float] K
wind data [5.0, 10.0] <class 'float'> Wind speed list[float] m/s
loc metadata Tokyo <NA> Observed location <class 'str'> <NA>
```
### Handling Missing Values
```python
specs = from_annotated(weather, default=None)
print(specs)
```
```
category data dtype name type units
temp data [273.15, 280.15] <class 'float'> Temperature list[float] K
wind data [5.0, 10.0] <class 'float'> Wind speed list[float] m/s
loc metadata Tokyo None Observed location <class 'str'> None
```
### Handling Full Specification
```python
specs = from_annotated(weather, merge=False)
print(specs)
```
```
category data dtype name type units
temp data [273.15, 280.15] <NA> Temperature list[float] K
temp/0 <NA> <NA> <class 'float'> <NA> <class 'float'> <NA>
wind data [5.0, 10.0] <NA> Wind speed list[float] m/s
wind/0 <NA> <NA> <class 'float'> <NA> <class 'float'> <NA>
loc metadata Tokyo <NA> Observed location <class 'str'> <NA>
```
| text/markdown | null | Akio Taniguchi <a-taniguchi@mail.kitami-it.ac.jp> | null | null | MIT License Copyright (c) 2025-2026 Akio Taniguchi Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | annotation, dataclass, dataframe, namedtuple, python, specification, typeddict, typing | [
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | <3.15,>=3.10 | [] | [] | [] | [
"pandas<3,>=2",
"typing-extensions<5,>=4"
] | [] | [] | [] | [
"homepage, https://astropenguin.github.io/typespecs",
"repository, https://github.com/astropenguin/typespecs"
] | uv/0.9.30 {"installer":{"name":"uv","version":"0.9.30","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Debian GNU/Linux","version":"12","id":"bookworm","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T05:20:04.665981 | typespecs-2.0.0-py3-none-any.whl | 7,372 | b9/b2/c03edf8b68e7ab8321ae7a682b6fe31718ef07c80dda34b008b30c1d4e46/typespecs-2.0.0-py3-none-any.whl | py3 | bdist_wheel | null | false | eaa1637670cb37707162eaa81ced40da | 67691c9f4fb6ee751744aa510816138b1a2b5223dd29f5f968c495ce5f19db86 | b9b2c03edf8b68e7ab8321ae7a682b6fe31718ef07c80dda34b008b30c1d4e46 | null | [
"LICENSE"
] | 234 |
2.4 | corrkit | 0.6.1 | Sync email threads from IMAP to Markdown, draft replies, push routing intelligence to Cloudflare | # Correspondence Kit
> **Alpha software.** Expect breaking changes between minor versions. See [VERSIONS.md](VERSIONS.md) for migration notes.
Consolidate conversations from multiple email accounts into a single flat directory of Markdown files. Draft replies with AI assistance. Push routing intelligence to Cloudflare.
Corrkit syncs threads from any IMAP provider (Gmail, Protonmail Bridge, self-hosted) into `correspondence/conversations/` — one file per thread, regardless of source. A thread that arrives via both Gmail and Protonmail merges into one file. Labels, accounts, and contacts are metadata, not directory structure. Slack and social media sources are planned.
## Install
Requires Python 3.11+ and [uv](https://docs.astral.sh/uv/).
**Quick start (general user):**
```sh
uvx corrkit init --user you@gmail.com
```
This creates `~/Documents/correspondence` with directory structure, `accounts.toml`,
and empty config files. Edit `accounts.toml` with credentials, then run `corrkit sync`.
**Developer setup (from repo checkout):**
```sh
cp accounts.toml.example accounts.toml # configure your email accounts
uv sync
```
### Account configuration
Define email accounts in `accounts.toml` with provider presets:
```toml
[accounts.personal]
provider = "gmail" # gmail | protonmail-bridge | imap
user = "you@gmail.com"
password_cmd = "pass email/personal" # or: password = "inline-secret"
labels = ["correspondence"]
default = true
[accounts.proton]
provider = "protonmail-bridge"
user = "you@proton.me"
password_cmd = "pass email/proton"
labels = ["private"]
[accounts.selfhosted]
provider = "imap"
imap_host = "mail.example.com"
smtp_host = "mail.example.com"
user = "user@example.com"
password_cmd = "pass email/selfhosted"
labels = ["important"]
```
Provider presets fill in IMAP/SMTP connection defaults:
| Field | `gmail` | `protonmail-bridge` | `imap` (generic) |
|---|---|---|---|
| imap_host | imap.gmail.com | 127.0.0.1 | (required) |
| imap_port | 993 | 1143 | 993 |
| imap_starttls | false | true | false |
| smtp_host | smtp.gmail.com | 127.0.0.1 | (required) |
| smtp_port | 465 | 1025 | 465 |
| drafts_folder | [Gmail]/Drafts | Drafts | Drafts |
Any preset value can be overridden per-account. Credential resolution: `password` (inline)
or `password_cmd` (shell command, e.g. `pass email/personal`).
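For instance, a Gmail-backed account that overrides a single preset value might look like this (the account name, label, and drafts folder below are illustrative):

```toml
[accounts.work]
provider = "gmail"               # preset supplies imap/smtp hosts and ports
drafts_folder = "Work/Drafts"    # per-account override of a preset value
user = "you@company.com"
password_cmd = "pass email/work"
labels = ["work"]
```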
**Backward compat**: If no `accounts.toml` exists, falls back to `.env` GMAIL_* vars.
### Legacy `.env` configuration
| Variable | Required | Description |
| ---------------------------- | -------- | ---------------------------------------------------- |
| `GMAIL_USER_EMAIL` | yes | Your Gmail address |
| `GMAIL_APP_PASSWORD` | yes | [App password](https://myaccount.google.com/apppasswords) |
| `GMAIL_SYNC_LABELS` | yes | Comma-separated Gmail labels to sync |
| `GMAIL_SYNC_DAYS` | no | How far back to sync (default: 3650) |
| `CLOUDFLARE_ACCOUNT_ID` | no | For routing intelligence push |
| `CLOUDFLARE_API_TOKEN` | no | For routing intelligence push |
| `CLOUDFLARE_D1_DATABASE_ID` | no | For routing intelligence push |
## Usage
All commands are available through the `corrkit` CLI:
```sh
corrkit --help # Show all commands
corrkit init --user EMAIL # Initialize a new data directory
corrkit sync # Sync all accounts
corrkit sync --account personal # Sync one account
corrkit sync --full # Full re-sync (ignore saved state)
corrkit sync-gmail # Alias for sync (backward compat)
corrkit list-folders [ACCOUNT] # List IMAP folders for an account
corrkit push-draft correspondence/drafts/FILE.md # Save a draft via IMAP
corrkit push-draft correspondence/drafts/FILE.md --send # Send via SMTP
corrkit add-label LABEL --account NAME # Add a label to an account's sync config
corrkit contact-add NAME --email EMAIL # Add a contact with context docs
corrkit for add NAME --label LABEL # Add a collaborator
corrkit for sync [NAME] # Push/pull shared submodules
corrkit for status # Check for pending changes
corrkit for remove NAME # Remove a collaborator
corrkit for rename OLD NEW # Rename a collaborator directory
corrkit for reset [NAME] # Pull, regenerate templates, commit & push
corrkit by find-unanswered # Find threads awaiting a reply
corrkit by validate-draft FILE # Validate draft markdown files
corrkit watch # Poll IMAP and sync on an interval
corrkit watch --interval 60 # Override poll interval (seconds)
corrkit spaces # List configured spaces
corrkit --space work sync # Sync a specific space
corrkit audit-docs # Audit instruction files for staleness
corrkit help # Show command reference
```
Run with `uv run corrkit <subcommand>` if the package isn't installed globally.
### Spaces
Manage multiple correspondence directories (personal, work, etc.) with named spaces:
```sh
# Init creates a space automatically
corrkit init --user you@gmail.com # registers "default" space
corrkit init --user work@company.com --data-dir ~/work/correspondence --space work
# List configured spaces
corrkit spaces
# Use a specific space for any command
corrkit --space work sync
corrkit --space personal for status
```
Spaces are stored in `~/.config/corrkit/config.toml` (Linux), `~/Library/Application Support/corrkit/config.toml` (macOS), or `%APPDATA%/corrkit/config.toml` (Windows). The first space added becomes the default. With one space configured, `--space` is optional.
Synced threads are written to `correspondence/conversations/[slug].md` (flat, one file per thread). Labels and accounts are metadata inside each file. A `manifest.toml` index is generated after each sync.
## Development
```sh
uv run pytest # Run tests
uv run ruff check . # Lint
uv run ruff format . # Format
uv run ty check # Type check
uv run poe precommit # Run ty + ruff + tests
```
## Unified conversation directory
All synced threads live in one flat directory:
```
correspondence/
conversations/ # one file per thread, all sources merged
project-update.md # immutable slug filename
lunch-plans.md # mtime = last message date (ls -t sorts by activity)
quarterly-review.md
contacts/ # per-contact context for drafting
alex/
AGENTS.md # relationship, tone, topics, notes
CLAUDE.md -> AGENTS.md
drafts/ # outgoing messages
manifest.toml # thread index (generated by sync)
```
**No subdirectories for accounts or labels.** A conversation with the same person may arrive via
Gmail, Protonmail, or both — it merges into one file. Source metadata is tracked inside each file
(`**Labels**`, `**Accounts**`) and in `manifest.toml`.
**Immutable filenames.** Each thread gets a `[slug].md` name derived from the subject on first write.
The filename never changes, even as new messages arrive. Thread identity is tracked by `**Thread ID**`
metadata inside the file.
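Putting the metadata conventions together, a synced thread file might look like the sketch below. Only the `**Thread ID**`, `**Labels**`, and `**Accounts**` field names come from the text above; the rest of the layout is illustrative:

```markdown
# Project update

**Thread ID**: thread-1234abcd
**Labels**: correspondence, for-alex
**Accounts**: personal

## 2026-02-18 — alex@example.com
Here's the latest on the project...
```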
**manifest.toml** indexes every thread by subject, labels, accounts, contacts, and last-updated date.
Agents read the manifest for discovery, then go straight to the file for content.
**Extensible to new sources.** The flat model means adding Slack or social media sync doesn't change
the directory layout — new messages merge into the same directory with their source tracked in metadata.
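The manifest's exact schema isn't documented here, but a plausible sketch of one entry, using only the fields the text says the manifest indexes (subject, labels, accounts, contacts, last-updated), might be:

```toml
# Hypothetical manifest.toml entry -- key names are assumptions.
[threads.project-update]
subject = "Project update"
labels = ["correspondence", "for-alex"]
accounts = ["personal"]
contacts = ["alex"]
updated = 2026-02-19
```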
## Sandboxing
Most AI email tools (OpenClaw, etc.) require OAuth access to your entire account. Once authorized, the agent can read every message, every contact, every thread — and you're trusting the service not to overreach.
Correspondence-kit inverts this. You control what any agent or collaborator can see:
1. **You label threads in your email client.** Only threads you explicitly label get synced locally.
2. **Labels route to scoped views.** Each collaborator/agent gets a submodule containing only the threads labeled for them — nothing else.
3. **Credentials never leave your machine.** `accounts.toml` is gitignored. Agents draft replies in markdown; only you can push to your email.
An agent added with `corrkit for add assistant --label for-assistant` can only see threads you've tagged `for-assistant`. It can't see your other conversations, your contacts, or other collaborators' repos. If the agent is compromised, the blast radius is limited to the threads you chose to share.
This works across multiple email accounts — Gmail, Protonmail, self-hosted — each with its own labels and routing rules, all funneling through the same scoped collaborator model.
## Contacts
Per-contact directories give Claude context when drafting emails — relationship history, tone preferences, recurring topics.
### Adding a contact
```sh
corrkit contact-add alex --email alex@example.com --email alex@work.com --label correspondence --account personal
```
This creates `correspondence/contacts/alex/` with an AGENTS.md template (+ CLAUDE.md symlink) and updates `contacts.toml`.
### Contact context
Edit `correspondence/contacts/{name}/AGENTS.md` with:
- **Relationship**: How you know this person, shared history
- **Tone**: Communication style overrides (defaults to voice.md)
- **Topics**: Recurring subjects, current projects
- **Notes**: Freeform context — preferences, pending items, important dates
### contacts.toml
Maps contacts to email addresses and conversation labels (for lookup, not sync routing):
```toml
[alex]
emails = ["alex@example.com", "alex@work.com"]
labels = ["correspondence"]
account = "personal"
```
Copy `contacts.toml.example` to `contacts.toml` to get started.
## Collaborators
Share specific email threads with people or AI agents via scoped GitHub repos.
### Adding a collaborator
```sh
# Human collaborator (invited via GitHub)
corrkit for add alex-gh --label for-alex --name "Alex"
# AI agent (uses a PAT instead of GitHub invite)
corrkit for add assistant-bot --label for-assistant --pat
# Bind all labels to one account
corrkit for add alex-gh --label for-alex --account personal
# Per-label account scoping (proton-dev account, INBOX folder)
# Use account:label syntax in collaborators.toml directly
```
This creates a private GitHub repo (`{owner}/to-{gh-user}`), initializes it with instructions, and adds it as a submodule under `for/{gh-user}/`. Collaborators use `uvx corrkit by ...` for helper commands.
### Daily workflow
```sh
# 1. Sync emails -- shared labels route to for/{gh-user}/conversations/
corrkit sync
# 2. Push synced threads to collaborator repos & pull their drafts
corrkit for sync
# 3. Check what's pending without pushing
corrkit for status
# 4. Review a collaborator's draft and push it as an email draft
corrkit push-draft for/alex-gh/drafts/2026-02-19-reply.md
```
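A collaborator draft is a plain markdown file; the authoritative format lives in each shared repo's AGENTS.md. As an illustrative sketch (only the `Status` → `sent` transition is described in this README; the other field names are assumptions):

```markdown
**To**: alex@example.com
**Subject**: Re: Project update
**Status**: draft   <!-- only the owner changes this to `sent` -->

Thanks for the update — the revised timeline works for me.
```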
### Unattended sync with `corrkit watch`
Run as a daemon to poll IMAP, sync threads, and push to shared repos automatically:
```sh
# Interactive — polls every 5 minutes (default), Ctrl-C to stop
corrkit watch
# Custom interval
corrkit watch --interval 60
```
Configure in `accounts.toml`:
```toml
[watch]
poll_interval = 300 # seconds between polls (default: 300)
notify = true # desktop alerts on new messages (default: false)
```
#### Running as a system service
**Linux (systemd):**
```sh
cp services/corrkit-watch.service ~/.config/systemd/user/
# Edit WorkingDirectory in the unit file to match your setup
systemctl --user enable --now corrkit-watch
journalctl --user -u corrkit-watch -f # view logs
```
**macOS (launchd):**
```sh
cp services/com.corrkit.watch.plist ~/Library/LaunchAgents/
# Edit WorkingDirectory in the plist to match your setup
launchctl load ~/Library/LaunchAgents/com.corrkit.watch.plist
tail -f /tmp/corrkit-watch.log # view logs
```
### What collaborators can do
- Read conversations labeled for them
- Draft replies in `for/{gh-user}/drafts/` following the format in AGENTS.md
- Run `uvx corrkit by find-unanswered` and `uvx corrkit by validate-draft` in their repo
- Push changes to their shared repo
### What only you can do
- Sync new emails (`corrkit sync`)
- Push synced threads to collaborator repos (`corrkit for sync`)
- Send emails (`corrkit push-draft --send`)
- Change draft Status to `sent`
### Removing a collaborator
```sh
corrkit for remove alex-gh
corrkit for remove alex-gh --delete-repo # also delete the GitHub repo
```
## Designed for humans and agents
Corrkit is built around files, CLI commands, and git — interfaces that work equally well for humans
and AI agents. No GUIs, no OAuth popups, no interactive prompts.
### Why this works
- **Everything is files.** Threads are Markdown. Config is TOML. Drafts are Markdown. Humans read
them in any editor; agents read and write them natively.
- **CLI is the interface.** Every operation is a single `corrkit` command. Scriptable, composable,
works the same whether a human or agent is at the keyboard.
- **Zero-install for collaborators.** `uvx corrkit by find-unanswered` and `uvx corrkit by validate-draft`
work without cloning the main repo or setting up a dev environment.
- **Self-documenting repos.** Each shared repo ships with `AGENTS.md` (full instructions),
`CLAUDE.md` (symlink for Claude Code), `voice.md`, and a `README.md`. A new collaborator —
human or agent — can start contributing immediately.
- **Templates stay current.** `corrkit for reset` regenerates all template files in shared repos
when the tool evolves. No manual sync of instructions across collaborators.
### Owner workflow
The owner can work directly or with an AI agent (Claude Code, Codex, etc.) that has full context of
both the codebase and the correspondence. In a single session:
1. Develop the tool — write code, run tests, commit
2. Sync emails — `corrkit sync`
3. Manage collaborators — add, reset templates, push synced threads
4. Draft replies — reading threads for context, writing drafts matching the voice guidelines
5. Review collaborator drafts — validate, approve, push to email
Humans and agents use the same commands. There's no separate "agent mode" — the CLI is the
universal interface.
### Collaborator workflow
Each collaborator — human or agent — gets a scoped git repo with:
```
for/{gh-user}/
AGENTS.md # Full instructions: formats, commands, status flow
CLAUDE.md # Symlink for Claude Code auto-discovery
README.md # Quick-start guide
voice.md # Writing style guidelines
conversations/ # Synced threads (read-only for the collaborator)
drafts/ # Where the collaborator writes replies
```
The collaborator reads conversations, drafts replies following the documented format, validates with
`uvx corrkit by validate-draft`, and pushes. The owner reviews and sends.
## Cloudflare architecture
Python handles the heavy lifting locally. Distilled intelligence is pushed to Cloudflare storage
for use by a lightweight TypeScript Worker that handles email routing.
```
Gmail/Protonmail
↓
Python (local, uv)
- sync threads → markdown
- extract intelligence (tags, contact metadata, routing rules)
- push to Cloudflare
↓
Cloudflare D1 / KV
- contact importance scores
- thread tags / inferred topics
- routing rules
↓
Cloudflare Worker (TypeScript)
- email routing decisions using intelligence from Python
```
Full conversation threads stay local. Cloudflare only receives the minimal distilled signal
needed for routing.
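The push step can be sketched against Cloudflare's D1 HTTP query API. This is a minimal sketch under stated assumptions: the `contact_scores` table and its columns are hypothetical (the real schema is whatever the TypeScript Worker reads), and only the `CLOUDFLARE_*` variable names come from the config table above. Shown with the standard library to stay self-contained, though the project itself depends on `httpx`:

```python
import json
import os
import urllib.request


def build_d1_query(contact: str, score: float) -> dict:
    """Build a parameterized D1 query upserting one distilled signal.

    Table/column names are hypothetical placeholders.
    """
    return {
        "sql": "INSERT OR REPLACE INTO contact_scores (email, score) VALUES (?, ?)",
        "params": [contact, score],
    }


def push_intelligence(contact: str, score: float) -> None:
    """POST one row to the D1 HTTP query endpoint."""
    account = os.environ["CLOUDFLARE_ACCOUNT_ID"]
    database = os.environ["CLOUDFLARE_D1_DATABASE_ID"]
    token = os.environ["CLOUDFLARE_API_TOKEN"]
    url = (
        "https://api.cloudflare.com/client/v4/accounts/"
        f"{account}/d1/database/{database}/query"
    )
    req = urllib.request.Request(
        url,
        data=json.dumps(build_d1_query(contact, score)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        resp.read()  # raise happens automatically on HTTP errors
```

Only the distilled number crosses the wire; the thread content itself never leaves the machine.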
## MCP alternative
Instead of pre-syncing to markdown files, Claude can access Gmail live via an MCP server during
a session. Options:
- **Pipedream** — hosted MCP with Gmail, Calendar, Contacts (note: data passes through Pipedream)
- **Local Python MCP server** — run a Gmail MCP server locally for fully private live access (future)
Current approach (file sync) is preferred for privacy and offline use. MCP is worth revisiting
for real-time workflows.
## Future work
- **Slack sync**: Pull conversations from Slack channels/DMs into the flat conversations/ directory
- **Social media sync**: Pull DMs and threads from social platforms into conversations/
- **Cloudflare routing**: TypeScript Worker consuming D1/KV data pushed from Python
- **Local MCP server**: Live email access during Claude sessions without Pipedream
- **Multi-user**: Per-user credential flow when shared with another developer
## AI agent instructions
Project instructions live in `AGENTS.md` (symlinked as `CLAUDE.md`). Personal overrides go in `CLAUDE.local.md` / `AGENTS.local.md` (gitignored).
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"google-auth-oauthlib>=1.0.0",
"httpx>=0.28.0",
"imapclient>=3.1.0",
"msgspec>=0.20.0",
"platformdirs>=4.0.0",
"python-dotenv>=1.0.0",
"tomli-w>=1.0.0",
"poethepoet==0.39.0; extra == \"dev\"",
"pytest>=8.0; extra == \"dev\"",
"ruff>=0.9.0; extra == \"dev\"",
"ty>=0.0.1a6; extra == \"dev\""
] | [] | [] | [] | [
"Repository, https://github.com/btakita/corrkit"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"CachyOS Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T05:18:44.675984 | corrkit-0.6.1-py3-none-any.whl | 54,549 | 45/8a/f8c2ef8824b76531eb7de28ca02144db21f1702b0b00fec221080d7b5ea9/corrkit-0.6.1-py3-none-any.whl | py3 | bdist_wheel | null | false | 0b7b33da68d8b08543917f0bb262ca55 | 65b504202f1d12789f945a1640c2865cb90a7c9c9310dd523ff3b0842c188e8e | 458af8c2ef8824b76531eb7de28ca02144db21f1702b0b00fec221080d7b5ea9 | Apache-2.0 | [
"LICENSE"
] | 226 |
2.4 | bashers | 0.8.6 | Installable cli helpers | # Bashers
CLI command helpers (Rust). Install: `cargo install bashers`. Both `bashers` and `bs` go to `~/.cargo/bin`; ensure it’s in PATH before pyenv/shims.
**Install from PyPI:** `pip install bashers` (or `pip install --upgrade bashers` for the latest). PyPI wheels are built for Python 3.11, 3.12, and 3.13. If you're on 3.13 and see an old version, a 3.13 wheel may not have been published yet for that release; use the next release, `cargo install bashers`, or install from the repo.
**Install from repo:** `./scripts/install.sh` (or `curl -sSf https://raw.githubusercontent.com/Sung96kim/bashers/main/scripts/install.sh | sh`). Use `--no-path` to skip profile changes.
## Usage
```bash
bashers update # update deps (fuzzy match optional)
bashers setup # install deps (--frozen, --rm, --dry-run)
bashers show # list packages
bashers git sync # checkout default, pull, fetch (--current = current branch only); bs sync works
bashers kube kmg <pattern> # pod describe + Image lines
bashers kube track <pattern> # follow logs (--err-only, --simple)
bashers docker build [ -f <path> ] # build from Dockerfile (default: ./Dockerfile; -t tag, --no-cache, -c context); bs build works
bashers watch -n 2 -- <cmd> # run command repeatedly, highlight changes (-n interval, --no-diff)
bashers self update # upgrade bashers
bashers version
```
| Command | Description |
|---------|-------------|
| **update** | Update deps (uv/poetry), fuzzy match |
| **setup** | Install project deps |
| **show** | List installed packages |
| **git** | `sync` (default branch or --current) |
| **kube** | `kmg`, `track` |
| **docker** | `build` (optional Dockerfile path [default: ./Dockerfile], tag, no-cache, context) |
| **watch** | Run command on an interval, diff highlight (green = changed) |
| **self** | `update` |
| **version** | Print version |
`bashers <cmd> --help` for options.
## Features
Fuzzy package matching, fzf when multiple matches, uv & poetry, color output, dry-run.
## Development
**Build:** `cargo build` / `cargo build --release`
**Test:** `cargo test` (unit: `cargo test --lib`; one test: `cargo test test_fuzzy_match_exact`)
**Quality:** `cargo fmt` · `cargo clippy` · `cargo fmt --check` · `cargo clippy -- -D warnings`
**Run:** `cargo run --quiet -- <cmd>` or `./target/debug/bashers <cmd>`. Optional: `NO_SPINNER=1` to disable spinner.
**Coverage:** `cargo install cargo-tarpaulin --locked` then `cargo tarpaulin --out Xml --output-dir coverage --timeout 120`
**Python wheel (build and test locally):** Install [maturin](https://pypi.org/project/maturin/) (`pip install maturin`). From the repo root: `maturin build --release --features pyo3`. Wheels are written to `target/wheels/`. Install with a matching Python (e.g. 3.13): `python3 -m pip install --force-reinstall target/wheels/bashers-*-cp313-*.whl`. Run `bashers --help` or `bashers version` to confirm.
**New command:** Add module under `src/commands/` (or `src/commands/<group>/`), add variant in `src/cli.rs`, wire in `src/lib.rs`, then `cargo build`. When adding or changing any CLI command, update the Usage section and the Command table above.
## Releasing
Releases are automated with **release-plz** on push to main. Use [Conventional Commits](https://www.conventionalcommits.org/): `feat:` (minor), `fix:` (patch), `feat!:` or `BREAKING CHANGE:` (major). Push to main → version/changelog PR, merge → publish to crates.io and GitHub Release. The **tag and GitHub Release are created in the workflow run triggered by the merge** (not the run that opened the PR). Set `CARGO_REGISTRY_TOKEN` if publishing to crates.io. First time: run `cargo publish` once so release-plz knows the current version.
Manual: bump version in `Cargo.toml`, tag `vX.Y.Z`, push tag; workflow builds and creates the GitHub Release.
| text/markdown; charset=UTF-8; variant=GFM | null | null | null | null | MIT | null | [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython"
] | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:17:56.981663 | bashers-0.8.6.tar.gz | 67,300 | 6f/d4/10e52b8b5c6e3e7cd34fa0808cd8bf4e796f9ed2809e4b7a9d5c34bcbcfc/bashers-0.8.6.tar.gz | source | sdist | null | false | e286da34b013df63a9b0b38d253571d2 | c80f137e3bb7d984f2809103497f2fcaa5a1f466eb2c41ef774717b555d90a9c | 6fd410e52b8b5c6e3e7cd34fa0808cd8bf4e796f9ed2809e4b7a9d5c34bcbcfc | null | [
"LICENSE"
] | 418 |
2.4 | rebake | 0.0.1 | A spiritual successor to cruft for managing cookiecutter projects | # rebake
A spiritual successor to [cruft](https://github.com/cruft/cruft) for managing [cookiecutter](https://github.com/cookiecutter/cookiecutter) projects.
rebake improves on cruft in two key areas:
1. **Better conflict UX** — uses `git apply -3` to produce inline conflict markers instead of `.rej` files
2. **New variable detection** — prompts for variables added to the template since the project was last updated
## Requirements
- Python 3.12+
- [uv](https://docs.astral.sh/uv/)
- Git
## Installation
```bash
uv tool install rebake
```
Or add it to a project:
```bash
uv add rebake
```
## Usage
### `rebake check`
Check whether the project is up-to-date with its template.
```bash
rebake check [PROJECT_DIR]
```
Exit codes:
- `0` — up-to-date
- `1` — outdated
- `2` — error (e.g. `.cruft.json` not found)
### `rebake update`
Apply the latest template changes to the project.
```bash
rebake update [PROJECT_DIR]
```
rebake will:
1. Abort if there are uncommitted changes (commit or stash first)
2. Detect new variables added to the template and prompt for their values
3. Generate a diff between the old and new rendered templates
4. Apply the diff with `git apply -3` — conflicts appear as inline markers
5. Update `.cruft.json` with the new commit hash and any newly added variables
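Where a hunk can't be applied cleanly, `git apply -3` leaves standard three-way conflict markers inline in the affected file (illustrative content, not real template output):

```text
<<<<<<< ours
uv sync --all-extras   # your local change
=======
uv sync --dev          # updated template
>>>>>>> theirs
```

This keeps conflict resolution in your normal editor/merge-tool workflow instead of leaving `.rej` files to apply by hand.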
## Migrating from cruft
rebake reads `.cruft.json` as-is. No migration needed — just replace `cruft` with `rebake` in your commands.
```bash
# before
cruft check
cruft update
# after
rebake check
rebake update
```
## `.cruft.json` format
```json
{
"template": "https://github.com/owner/template",
"commit": "abc123...",
"checkout": "main",
"context": {
"cookiecutter": {
"project_name": "my-project",
"author": "Jane Doe"
}
},
"skip": ["go.sum", "*.lock"]
}
```
## Development
```bash
git clone https://github.com/kitagry/rebake
cd rebake
uv sync
uv run pytest
```
| text/markdown | null | Ryo Kitagawa <kitadrum50@gmail.com> | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"cookiecutter>=2.6.0",
"gitpython>=3.1.44",
"rich>=13.9.4",
"typer>=0.15.1"
] | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:17:54.538417 | rebake-0.0.1.tar.gz | 34,596 | 6c/4a/ac2a30d53cf24b310396b782e887f81eb049a8b36143779fa537cd4740fd/rebake-0.0.1.tar.gz | source | sdist | null | false | c5a157f035aaa6738b62744579d696bb | 61cf56427fcc82ba5ef521b4d6958c0d41cc5cc5d69ded2b6a9d8b1d40fbaad4 | 6c4aac2a30d53cf24b310396b782e887f81eb049a8b36143779fa537cd4740fd | null | [] | 230 |
2.4 | Undefined-bot | 2.14.0 | A high-performance, highly scalable QQ group and private chat robot based on a self-developed architecture. | <table border="0">
<tr>
<td width="70%" valign="top">
<div align="center">
<h1>Undefined</h1>
<em>A high-performance, highly scalable QQ group and private chat robot based on a self-developed architecture.</em>
<br/><br/>
<a href="https://www.python.org/"><img src="https://img.shields.io/badge/Python-3.11--3.13-blue.svg" alt="Python"></a>
<a href="https://docs.astral.sh/uv/"><img src="https://img.shields.io/badge/uv-auto%20python%20manager-6a5acd.svg" alt="uv"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License"></a>
<a href="https://deepwiki.com/69gg/Undefined"><img src="https://deepwiki.com/badge.svg" alt="Ask DeepWiki"></a>
<br/><br/>
<p>The great peng one day rises with the wind, soaring straight up ninety thousand li.</p>
</div>
<h3>About the Project</h3>
<p>
<strong>Undefined</strong> is a powerful QQ bot platform built on a brand-new, self-developed <strong>Skills</strong> architecture. Constructed on a modern asynchronous Python stack, it offers more than basic conversation: its built-in intelligent agents provide multimodal capabilities such as code analysis, web search, and entertainment interaction.
</p>
</td>
<td width="30%">
<img src="https://raw.githubusercontent.com/69gg/Undefined/main/img/head.jpg" width="100%" alt="Undefined" />
</td>
</tr>
</table>
### _Integrates with [NagaAgent](https://github.com/Xxiii8322766509/NagaAgent)!_
---
<details>
<summary><b>Table of Contents</b></summary>
- [Try It Now](#try-it-now)
- [Core Features](#core-features)
- [System Architecture Overview](#system-architecture-overview)
- [Architecture Diagram (Mermaid)](#architecture-diagram-mermaid)
- [Further Reading](#further-reading)
- [Installation and Deployment](#installation-and-deployment)
- [pip/uv tool deployment (recommended for direct use)](#pipuv-tool-deployment-recommended-for-direct-use)
- [Full logs (for troubleshooting)](#full-logs-for-troubleshooting)
- [Customizing pip/uv tool deployments](#customizing-pipuv-tool-deployments)
- [Source deployment (development/use)](#源码部署开发使用)
- [1. Clone the project](#1-克隆项目)
- [2. Install dependencies](#2-安装依赖)
- [3. Configure the environment](#3-配置环境)
- [Customization guide for source deployments](#源码部署的自定义指南)
- [4. Start and run](#4-启动运行)
- [5. Cross-platform and resource paths (important)](#5-跨平台与资源路径重要)
- [Configuration](#配置说明)
- [Config hot-reload notes](#配置热更新说明)
- [Session whitelist example](#会话白名单示例)
- [MCP configuration](#mcp-配置)
- [Agent-private MCP (optional)](#agent-私有-mcp可选)
- [Usage](#使用说明)
- [Getting started](#开始使用)
- [Agent capability showcase](#agent-能力展示)
- [Admin commands](#管理员命令)
- [Extension and Development](#扩展与开发)
- [Directory structure](#目录结构)
- [Development guide](#开发指南)
- [Development self-check](#开发自检)
- [Docs and Further Reading](#文档与延伸阅读)
- [Risks and Disclaimer](#风险提示与免责声明)
- [Acknowledgements and Links](#致谢与友链)
- [NagaAgent](#nagaagent)
- [License](#开源协议)
</details>
---
## Try It Now
[Click to add the official bot instance on QQ](https://qm.qq.com/q/cvjJoNysGA)
## Core Features
- **Skills architecture**: a newly designed skill system that layers basic tools (Tools) and intelligent agents (Agents), with automatic discovery and registration.
- **Skills hot reload**: the `skills/` directory is scanned automatically; tools and agents are reloaded as soon as changes are detected, with no service restart.
- **Config hot reload + WebUI**: configuration lives in `config.toml` with hot reload; a WebUI provides online editing and validation.
- **Multi-model pool**: configure several AI models with round-robin, random, or user-specified selection; also supports running several models concurrently and continuing the conversation with the best result. See the [multi-model docs](docs/multi-model.md).
- **Session whitelist (group/private)**: just set the `access.allowed_group_ids` / `access.allowed_private_ids` lists to "lock" the bot to specific groups and private chats; this prevents accidental triggering when the bot is added to unknown groups and stops tools or scheduled tasks from sending messages to the wrong place (both lists are empty by default, meaning unrestricted).
- **Parallel tool execution**: the main AI and sub-agents alike issue concurrent tool calls via `asyncio`, greatly speeding up multi-task work (e.g. reading several files or searching several keywords at once).
- **Intelligent agent matrix**: multiple specialized built-in agents that collaborate on complex tasks.
- **callable.json sharing mechanism**: a simple config file (`callable.json`) lets agents call each other and exposes tools under `skills/tools/` or `skills/toolsets/` to agents via whitelists, with fine-grained access control for complex multi-agent collaboration.
- **Auto-generated agent introductions**: at startup, `intro.generated.md` (first person, structured) is generated from a hash of each agent's code/config and merged with `intro.md` as the agent description; this cuts manual maintenance, keeps capability descriptions in sync with the implementation, and helps accurate dispatching.
- **Request context management**: a unified request-context system based on Python `contextvars`, with automatic UUID tracing, zero race conditions, and complete concurrency isolation.
- **Scheduled task system**: a powerful scheduler supporting Crontab syntax for automated actions (e.g. timed reminders, timed searches).
- **MCP protocol support**: connect external tools and data sources via MCP (Model Context Protocol) to extend AI capabilities.
- **Agent-private MCP**: a single agent can have its own MCP configuration, loaded on demand per call and released afterwards, with its tools visible only to that agent.
- **Anthropic Skills**: supports Anthropic Agent Skills (SKILL.md format) following the agentskills.io open standard, for injecting domain knowledge.
- **Bilibili video extraction**: automatically detects Bilibili video links/BV ids/mini-program shares in messages, downloads the 1080p video, and sends it over QQ; also exposed as an AI tool call.
- **Chain-of-thought support**: chain-of-thought can be enabled to improve complex logical reasoning.
- **High-concurrency architecture**: fully asynchronous design on `asyncio`, with multi-queue message processing and concurrent tool execution for high-concurrency scenarios.
- **Async-safe I/O**: a unified IO layer combines a thread pool, cross-platform file locks (Linux/macOS `flock`, Windows `msvcrt`), and atomic writes (`os.replace`) so concurrent writes never corrupt data and never block the main event loop.
- **Security protection**: a dedicated built-in security model detects injection attacks and malicious content in real time.
- **OneBot protocol**: fully compatible with OneBot V11, supporting multiple frontends (e.g. NapCat).
## System Architecture Overview
Undefined uses an **8-layer asynchronous architecture**. The detailed diagram below covers all core components, the 6 agents, the 7 toolset categories, storage, and data flow:
### Architecture Diagram (Mermaid)
```mermaid
graph TB
%% ==================== External entities ====================
User([User])
Admin([Admin])
OneBotServer["OneBot protocol endpoint<br/>(NapCat / Lagrange.Core)"]
LLM_API["LLM API providers<br/>(OpenAI / Claude / DeepSeek / etc.)"]
%% ==================== Core entry layer ====================
subgraph EntryPoint["Core entry layer (src/Undefined/)"]
Main["main.py<br/>startup entry"]
ConfigLoader["ConfigManager<br/>config manager<br/>[config/manager.py + loader.py]"]
ConfigModels["Config models<br/>[config/models.py]<br/>ChatModelConfig<br/>VisionModelConfig<br/>SecurityModelConfig<br/>AgentModelConfig"]
OneBotClient["OneBotClient<br/>WebSocket client<br/>[onebot.py]"]
Context["RequestContext<br/>request context<br/>[context.py]"]
WebUI["webui.py<br/>config console<br/>[src/Undefined/webui.py]"]
end
%% ==================== Message processing layer ====================
subgraph MessageLayer["Message processing layer"]
MessageHandler["MessageHandler<br/>message handler<br/>[handlers.py]"]
SecurityService["SecurityService<br/>security service<br/>• injection detection • rate limiting<br/>[security.py]"]
CommandDispatcher["CommandDispatcher<br/>command dispatcher<br/>• /help /stats /lsadmin<br/>• /addadmin /rmadmin<br/>[services/command.py]"]
AICoordinator["AICoordinator<br/>AI coordinator<br/>• prompt building • queue management<br/>• reply execution<br/>[ai_coordinator.py]"]
QueueManager["QueueManager<br/>queue manager<br/>[queue_manager.py]"]
end
%% ==================== AI core capability layer ====================
subgraph AILayer["AI core capability layer (src/Undefined/ai/)"]
AIClient["AIClient<br/>main AI client entry<br/>[client.py]<br/>• skill hot reload • MCP init<br/>• agent intro generation"]
PromptBuilder["PromptBuilder<br/>prompt builder<br/>[prompts.py]"]
ModelRequester["ModelRequester<br/>model requester<br/>[llm.py]<br/>• OpenAI SDK • tool cleanup<br/>• thinking extraction"]
ToolManager["ToolManager<br/>tool manager<br/>[tooling.py]<br/>• tool execution • agent tool merging<br/>• MCP tool injection"]
MultimodalAnalyzer["MultimodalAnalyzer<br/>multimodal analyzer<br/>[multimodal.py]"]
SummaryService["SummaryService<br/>summary service<br/>[summaries.py]"]
TokenCounter["TokenCounter<br/>token accounting<br/>[tokens.py]"]
end
%% ==================== Storage & context layer ====================
subgraph StorageLayer["Storage & context layer"]
HistoryManager["MessageHistoryManager<br/>message history<br/>[utils/history.py]<br/>• lazy loading • 10000-entry cap"]
MemoryStorage["MemoryStorage<br/>long-term memory<br/>[memory.py]<br/>• 500-entry cap • auto dedup"]
EndSummaryStorage["EndSummaryStorage<br/>short-term summary store<br/>[end_summary_storage.py]"]
FAQStorage["FAQStorage<br/>FAQ store<br/>[faq.py]"]
ScheduledTaskStorage["ScheduledTaskStorage<br/>scheduled task store<br/>[scheduled_task_storage.py]"]
TokenUsageStorage["TokenUsageStorage<br/>token usage stats<br/>[token_usage_storage.py]<br/>• auto archiving • gzip compression"]
end
%% ==================== Skills system layer ====================
subgraph SkillsLayer["Skills system (src/Undefined/skills/)"]
ToolRegistry["ToolRegistry<br/>tool registry<br/>[registry.py]<br/>• lazy loading • hot reload<br/>• execution stats"]
AgentRegistry["AgentRegistry<br/>agent registry<br/>[registry.py]<br/>• agent discovery • tool aggregation"]
subgraph AtomicTools["Basic tools"]
T_End["end<br/>end conversation"]
T_Python["python_interpreter<br/>Python interpreter"]
T_Time["get_current_time<br/>current time"]
T_BilibiliVideo["bilibili_video<br/>Bilibili video download & send"]
end
subgraph Toolsets["Toolsets (7 categories)"]
TS_Group["group.*<br/>• get_member_list<br/>• get_member_info<br/>• get_honor_info<br/>• get_files"]
TS_Messages["messages.*<br/>• send_message<br/>• get_recent_messages<br/>• get_forward_msg"]
TS_Memory["memory.*<br/>• add / delete<br/>• list / update"]
TS_Notices["notices.*<br/>• list / get / stats"]
TS_Render["render.*<br/>• render_html<br/>• render_latex<br/>• render_markdown"]
TS_Scheduler["scheduler.*<br/>• create_schedule_task<br/>• delete_schedule_task<br/>• list_schedule_tasks"]
end
subgraph IntelligentAgents["Intelligent agents (6)"]
A_Info["info_agent<br/>information assistant<br/>(17 tools)<br/>• weather_query<br/>• *hot trending searches<br/>• bilibili_*<br/>• whois"]
A_Web["web_agent<br/>web search assistant<br/>• MCP Playwright<br/>• web_search<br/>• crawl_webpage"]
A_File["file_analysis_agent<br/>file analysis assistant<br/>(14 tools)<br/>• extract_* (PDF/Word/Excel/PPT)<br/>• analyze_code<br/>• analyze_multimodal"]
A_Naga["naga_code_analysis_agent<br/>NagaAgent code analysis<br/>(7 tools)<br/>• read_file / glob<br/>• search_file_content"]
A_Ent["entertainment_agent<br/>entertainment assistant<br/>(9 tools)<br/>• ai_draw_one<br/>• horoscope<br/>• video_random_recommend"]
A_Code["code_delivery_agent<br/>code delivery assistant<br/>(13 tools)<br/>• Docker container isolation<br/>• Git repo cloning<br/>• code writing & verification<br/>• packaging & upload"]
end
MCPRegistry["MCPToolRegistry<br/>MCP tool registry<br/>[mcp/registry.py]"]
end
%% ==================== IO utility layer ====================
subgraph IOLayer["Async IO layer (utils/io.py)"]
IOUtils["IO utilities<br/>• write_json • read_json<br/>• append_line<br/>• file locks (flock/msvcrt) + atomic writes"]
end
%% ==================== Data persistence layer ====================
subgraph Persistence["Data persistence layer (data/)"]
Dir_History["history/<br/>• group_{id}.json<br/>• private_{id}.json"]
Dir_FAQ["faq/{group_id}/<br/>• YYYYMMDD-NNN.json"]
Dir_TokenUsage["token_usage_archives/<br/>• *.jsonl.gz"]
File_Config["config.toml<br/>config.local.json"]
File_Memory["memory.json<br/>(long-term memory)"]
File_EndSummary["end_summaries.json<br/>(short-term summaries)"]
File_ScheduledTasks["scheduled_tasks.json<br/>(scheduled tasks)"]
end
%% ==================== Edges ====================
%% External entities to core entry
User -->|"message"| OneBotServer
Admin -->|"command"| OneBotServer
OneBotServer <-->|"WebSocket<br/>Event / API"| OneBotClient
%% Core entry layer internals
Main -->|"initialize"| ConfigLoader
Main -->|"create"| OneBotClient
Main -->|"create"| AIClient
ConfigLoader --> ConfigModels
ConfigLoader -->|"read"| File_Config
WebUI -->|"read/write"| File_Config
OneBotClient -->|"message events"| MessageHandler
%% Message processing layer
MessageHandler -->|"1. security check"| SecurityService
SecurityService -.->|"API call"| LLM_API
MessageHandler -->|"2. command?"| CommandDispatcher
CommandDispatcher -->|"result"| OneBotClient
MessageHandler -->|"3. auto-reply"| AICoordinator
AICoordinator -->|"create context"| Context
AICoordinator -->|"enqueue"| QueueManager
QueueManager -->|"1 Hz dispatch<br/>async execution"| AIClient
%% AI core capability layer
AIClient --> PromptBuilder
AIClient --> ModelRequester
AIClient --> ToolManager
AIClient --> MultimodalAnalyzer
AIClient --> SummaryService
AIClient --> TokenCounter
ModelRequester <-->|"API request"| LLM_API
%% Storage connections
PromptBuilder -->|"inject"| HistoryManager
PromptBuilder -->|"inject"| MemoryStorage
PromptBuilder -->|"inject"| EndSummaryStorage
MessageHandler -->|"save message"| HistoryManager
AICoordinator -->|"record stats"| TokenUsageStorage
CommandDispatcher -->|"FAQ operations"| FAQStorage
%% Skills system
ToolManager -->|"get tools"| ToolRegistry
ToolManager -->|"get agents"| AgentRegistry
ToolManager -->|"get MCP"| MCPRegistry
ToolRegistry --> AtomicTools
ToolRegistry --> Toolsets
AgentRegistry --> IntelligentAgents
%% IO layer connections
HistoryManager -->|"async read/write"| IOUtils
MemoryStorage -->|"async read/write"| IOUtils
TokenUsageStorage -->|"async read/write<br/>auto archive"| IOUtils
FAQStorage -->|"async read/write"| IOUtils
ScheduledTaskStorage -->|"async read/write"| IOUtils
IOUtils --> Dir_History
IOUtils --> File_Memory
IOUtils --> File_EndSummary
IOUtils --> Dir_TokenUsage
IOUtils --> Dir_FAQ
IOUtils --> File_ScheduledTasks
%% Agent recursion
IntelligentAgents -->|"recursive call"| AIClient
%% Final output
AIClient -->|"Reply Text"| OneBotClient
OneBotClient -->|"send"| OneBotServer
%% Style definitions
classDef external fill:#ffebee,stroke:#c62828,stroke-width:2px
classDef core fill:#e3f2fd,stroke:#1565c0,stroke-width:2px
classDef message fill:#fff3e0,stroke:#ef6c00,stroke-width:2px
classDef ai fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px
classDef skills fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
classDef storage fill:#e0f7fa,stroke:#00838f,stroke-width:2px
classDef io fill:#fce4ec,stroke:#c2185b,stroke-width:1px
classDef persistence fill:#f5f5f5,stroke:#616161,stroke-width:1px
class User,Admin,OneBotServer,LLM_API external
class Main,ConfigLoader,ConfigModels,OneBotClient,Context,WebUI core
class MessageHandler,SecurityService,CommandDispatcher,AICoordinator,QueueManager message
class AIClient,PromptBuilder,ModelRequester,ToolManager,MultimodalAnalyzer,SummaryService,TokenCounter ai
class ToolRegistry,AgentRegistry,MCPRegistry,AtomicTools,Toolsets,IntelligentAgents skills
class HistoryManager,MemoryStorage,EndSummaryStorage,FAQStorage,ScheduledTaskStorage,TokenUsageStorage storage
class IOUtils io
class Dir_History,Dir_FAQ,Dir_TokenUsage,File_Config,File_Memory,File_EndSummary,File_ScheduledTasks persistence
```
### Further Reading
> For a detailed introduction, see [ARCHITECTURE.md](ARCHITECTURE.md)
---
## Installation and Deployment
Two deployment options are provided: pip/uv tool installation and source deployment. The former suits direct use; the latter suits deep customization and secondary development.
> Python version requirement: `3.11`-`3.13` (inclusive).
>
> If you use `uv`, you usually do not need to pin the system Python version manually; `uv` selects/downloads a compatible interpreter based on the project constraints.
### pip/uv tool deployment (recommended for direct use)
Best if you just want to "install and run": the `Undefined`/`Undefined-webui` commands are installed into your environment as executable entry points.
```bash
# Option 1: pip
pip install -U Undefined-bot
python -m playwright install
# Option 2: uv tool (recommended for isolated installation)
# Install uv (if not already installed)
pip install uv
# Optional: explicitly pin a compatible interpreter (uv also picks one automatically)
# uv python install 3.12
uv tool install Undefined-bot
uv tool run --from Undefined-bot playwright install
```
After installation, prepare a `config.toml` in any directory and start:
```bash
# Startup options (pick one)
#
# 1) Start the bot directly (no WebUI)
Undefined
#
# 2) Start the WebUI (edit the config in a browser; start/stop the bot from the WebUI)
Undefined-webui
```
> Important: run **either** `Undefined` **or** `Undefined-webui`; never run both processes at once, or you will hit problems such as duplicate logins and duplicate message sending/receiving.
>
> - Choosing `Undefined`: run the bot directly in the terminal; changes to `config.toml` take effect after a restart (or via the hot-reload capability).
> - Choosing `Undefined-webui`: after startup, open the WebUI (default `http://127.0.0.1:8787`, default password `changeme`; **you must change the default password on first start, as the default password cannot log in**; configurable under `[webui]` in `config.toml`), edit/validate the config online, and start/stop the bot process from the WebUI.
> If `Undefined-webui` detects that the current directory lacks a `config.toml`, it automatically generates one from `config.toml.example` so you can edit it directly in the WebUI.
> Tip: resource files ship with the package, so the bot can be started outside the project root; for customization, see the notes below.
#### Full logs (for troubleshooting)
If you want to keep complete installation/runtime logs, redirect them to a file:
```bash
# pip installation log
python -m pip install -U Undefined-bot 2>&1 | tee install.log
# Runtime log (CLI)
Undefined 2>&1 | tee undefined.log
# Runtime log (WebUI)
Undefined-webui 2>&1 | tee undefined-webui.log
```
#### Customization under pip/uv tool deployment
The wheel ships with `res/**` and `img/**`. To make customization easy, resource lookup follows an "overridable" strategy:
1. Load the same-named file from the run directory first (e.g. `./res/prompts/...`)
2. If it does not exist, fall back to the resource bundled with the installed package
So you never need to touch site-packages; just place override files in the run directory, for example:
```bash
mkdir -p res/prompts
# Then put the prompts you want to change at the matching path (keep file names and directory layout identical)
```
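The two-step lookup described above can be sketched like this (an illustrative helper, not the project's actual `resolve_resource_path`):

```python
from pathlib import Path

def resolve_resource(rel_path: str, package_root: Path, run_dir: Path = Path(".")) -> Path:
    """Prefer a same-named file under the run directory; else fall back to the packaged copy."""
    local = run_dir / rel_path
    return local if local.exists() else package_root / rel_path
```

With this policy, dropping an override file into the run directory shadows the packaged default without touching site-packages.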
If you want to change the default prompts/default copy themselves (rather than overriding them per run directory), use the "source deployment" below and edit `res/` in the repository; editing `site-packages/res` in an installed environment is discouraged (upgrades overwrite it).
If you don't know where the default prompt file lives inside the installed package, print its path as shown below (so you can copy it out to edit):
```bash
python -c "from Undefined.utils.resources import resolve_resource_path; print(resolve_resource_path('res/prompts/undefined.xml'))"
```
Resource-loading self-check (verifies that the wheel resources are usable):
```bash
python -c "from Undefined.utils.resources import read_text_resource; print(len(read_text_resource('res/prompts/undefined.xml')))"
```
### Source deployment (development/use)
#### 1. Clone the project
Since the project uses `NagaAgent` as a submodule, clone it with:
```bash
git clone --recursive https://github.com/69gg/Undefined.git
cd Undefined
```
If you already cloned the project but did not initialize the submodules:
```bash
git submodule update --init --recursive
```
#### 2. Install dependencies
`uv` is recommended for modern, extremely fast Python dependency management:
```bash
# Install uv (if not already installed)
pip install uv
# Optional: pre-install a compatible interpreter (3.12 recommended)
# uv python install 3.12
# Sync dependencies
# uv handles interpreter selection across 3.11-3.13 based on pyproject.toml
uv sync
```
You also need to install the Playwright browser runtime (used for the web-browsing features):
```bash
uv run playwright install
```
#### 3. Configure the environment
Copy the example config `config.toml.example` to `config.toml` and fill in your settings.
```bash
cp config.toml.example config.toml
```
#### Customization guide for source deployment
- Custom prompts/preset copy: edit `res/` at the repository root directly (e.g. `res/prompts/`).
- Custom image assets: edit the corresponding files under `img/` (e.g. `img/xlwy.jpg`).
- If you want "run-directory overrides first": place `./res/...` in the startup directory; it takes precedence over the default resources (handy for one installation with multiple runtime configurations).
#### 4. Run
Startup options (pick one):
```bash
# 1) Start the bot directly (no WebUI)
uv run Undefined
# 2) Start the WebUI (edit the config in a browser; start/stop the bot from the WebUI)
uv run Undefined-webui
```
> Important: pick **one** of the two modes; do not run both. If you choose `Undefined-webui`, manage starting/stopping the bot process from the WebUI.
#### 5. Cross-platform behavior and resource paths (important)
- Resource lookup: at runtime, same-named `res/...` / `img/...` files in the run directory load first (for easy overrides), falling back to the packaged resources; a repository-structure fallback lookup is also provided, so prompts and resource copy load correctly when starting from any directory.
- Resource overrides: to override default prompts/copy, place same-named `res/...` files in the current working directory, or edit `res/` directly in the source tree.
- Concurrent writes: JSON/log files are written with a "lock file + atomic replace" strategy that behaves identically on Windows/Linux/macOS (`*.lock` files will be created).
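The atomic-replace half of that strategy can be sketched in a few lines of Python (a simplified illustration; the project's real writer also takes a `*.lock` file, which is omitted here):

```python
import os
import tempfile

def atomic_write_text(path: str, text: str) -> None:
    """Write to a temp file in the target directory, then atomically swap it into place."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as handle:
            handle.write(text)
            handle.flush()
            os.fsync(handle.fileno())
        os.replace(tmp, path)  # os.replace is atomic on both POSIX and Windows
    except BaseException:
        if os.path.exists(tmp):
            os.remove(tmp)
        raise
```

Readers either see the old file or the new one, never a half-written state.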
### Configuration
Configure the following core parameters in `config.toml` (see `config.toml.example` for a sample):
- **Basics**: `[core]` and `[onebot]`
  - `process_every_message`: whether to process every group message (on by default); when off, only `@bot` mentions, private chats, and pokes are processed (group messages are still written to history)
  - `process_private_message`: whether to process private messages; when off, private history is recorded but no AI reply is triggered
  - `process_poke_message`: whether to respond to poke events
  - `context_recent_messages_limit`: upper bound on recent history messages injected into the model (`0-200`; `0` means none)
- **Conversation allowlist (recommended)**: `[access]`
  - `allowed_group_ids`: list of group IDs allowed for processing/sending
  - `allowed_private_ids`: list of private-chat QQ IDs allowed for processing/sending
  - `superadmin_bypass_allowlist`: whether super admins may bypass `allowed_private_ids` in private chat (affects private send/receive only; group chat still strictly follows `allowed_group_ids`)
  - Rule: restriction mode is enabled as soon as either `allowed_group_ids` or `allowed_private_ids` is non-empty; messages from groups/private chats not on the allowlist are ignored outright, and all outbound messages are blocked as well (including tool calls and scheduled tasks).
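The rule above boils down to two small checks, sketched here as hypothetical helpers (not the project's actual code):

```python
def restriction_enabled(allowed_group_ids, allowed_private_ids) -> bool:
    """Restriction mode turns on as soon as either allowlist is non-empty."""
    return bool(allowed_group_ids) or bool(allowed_private_ids)

def group_message_allowed(group_id, allowed_group_ids, allowed_private_ids) -> bool:
    """With restriction mode off everything passes; otherwise the group must be listed."""
    if not restriction_enabled(allowed_group_ids, allowed_private_ids):
        return True
    return group_id in allowed_group_ids
```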
- **Model configuration**: `[models.chat]` / `[models.vision]` / `[models.agent]` / `[models.security]`
  - `api_url`: OpenAI-compatible **base URL** (e.g. `https://api.openai.com/v1` / `http://127.0.0.1:8000/v1`)
  - `models.security.enabled`: whether security-model screening is enabled (on by default)
  - DeepSeek Thinking + Tool Calls: if you use `deepseek-reasoner`, or `deepseek-chat` with `thinking={"type":"enabled"}`, and tool calling is enabled, enabling `deepseek_new_cot_support` is recommended
- **Logging**: `[logging]`
  - `tty_enabled`: whether to log to the terminal TTY (default `false`); when off, logs go to the log file only
- **Feature flags (optional)**: `[features]`
  - `nagaagent_mode_enabled`: whether to enable NagaAgent mode (when on, uses `res/prompts/undefined_nagaagent.xml` and exposes the related Agents; when off, uses `res/prompts/undefined.xml` and hides/disables them)
- **Easter egg (optional)**: `[easter_egg]`
  - `keyword_reply_enabled`: whether to enable keyword auto-replies in group chat (e.g. "心理委员"; off by default)
- **Token usage archiving**: `[token_usage]` (default 5MB; `<=0` disables)
- **Skills hot reload**: `[skills]`
- **Bilibili video extraction**: `[bilibili]`
  - `auto_extract_enabled`: whether auto-extraction is enabled (automatically downloads and sends the video when a Bilibili link/BV ID is detected; off by default)
  - `cookie`: full Bilibili cookie string (recommended; include at least `SESSDATA` for a better pass rate against risk control)
  - `prefer_quality`: preferred quality (`80`=1080P, `64`=720P, `32`=480P)
  - `max_duration`: maximum video duration (seconds); an info card is sent instead when exceeded (`0`=unlimited)
  - `max_file_size`: maximum file size (MB); the fallback strategy triggers when exceeded (`0`=unlimited)
  - `oversize_strategy`: over-limit strategy (`downgrade`=retry at a lower quality, `info`=send cover + title + description)
  - `auto_extract_group_ids` / `auto_extract_private_ids`: allowlists for auto-extraction (empty = follow the global access settings)
  - System dependency: `ffmpeg` must be installed
- **Message tools**: `[messages]`
  - `send_text_file_max_size_kb`: per-file size cap (KB) for `messages.send_text_file` text sends, default `512` (`0.5MB`)
  - `send_url_file_max_size_mb`: size cap (MB) for `messages.send_url_file` URL file sends, default `100`
  - Advice: prefer `messages.send_text_file` for single-file, lightweight tasks; prefer `code_delivery_agent` for multi-file projects or deliveries that need verification/packaging
- **Proxy settings (optional)**: `[proxy]`
- **WebUI**: `[webui]` (default `127.0.0.1:8787`, default password `changeme`; start with `uv run Undefined-webui`)
The dynamic admin list still lives in `config.local.json` (read and written automatically).
> The legacy `.env` file still works as a temporary compatibility input, but it is no longer recommended.
>
> Windows users: do not write paths in `config.toml` as `D:\xxx\yyy` (backslashes are treated as escapes). Use `D:/xxx/yyy`, or single quotes: `'D:\xxx\yyy'`, or doubled backslashes inside double quotes: `"D:\\xxx\\yyy"`.
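The three accepted forms from the note above, side by side in a `config.toml` fragment (the paths are placeholders):

```toml
# All three spellings point at the same Windows path:
path_a = "D:/xxx/yyy"     # forward slashes in a basic string
path_b = 'D:\xxx\yyy'     # literal string: backslashes are not escape characters
path_c = "D:\\xxx\\yyy"   # basic string with escaped backslashes
```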
The WebUI supports: grouped config forms for quick editing, diff preview, and log tail viewing (with auto refresh).
#### Config hot-reload notes
- Automatic hot reload by default: edits to `config.toml` take effect automatically
- Items that require a restart (blacklist): `log_level`, `logging.file_path`, `logging.max_size_mb`, `logging.backup_count`, `logging.tty_enabled`, `onebot.ws_url`, `onebot.token`, `webui.url`, `webui.port`, `webui.password`
- Model dispatch cadence: `models.*.queue_interval_seconds` supports hot reload and takes effect immediately
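The reload policy amounts to checking changed keys against the restart blacklist, which can be sketched like this (illustrative, not the project's implementation):

```python
# Keys from the blacklist above: changes to these only take effect after a restart.
RESTART_REQUIRED = {
    "log_level", "logging.file_path", "logging.max_size_mb",
    "logging.backup_count", "logging.tty_enabled",
    "onebot.ws_url", "onebot.token",
    "webui.url", "webui.port", "webui.password",
}

def needs_restart(changed_keys) -> bool:
    """True if any changed config key is on the restart blacklist."""
    return any(key in RESTART_REQUIRED for key in changed_keys)
```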
#### Conversation allowlist example
Restrict the bot to 2 groups + 1 private chat (the most common "safe rollout" configuration):
```toml
[access]
allowed_group_ids = [123456789, 987654321]
allowed_private_ids = [1122334455]
superadmin_bypass_allowlist = true
```
> Running the project requires a OneBot protocol endpoint; [NapCat](https://napneko.github.io/) or [Lagrange.Core](https://github.com/LagrangeDev/Lagrange.Core) is recommended.
### MCP Configuration
Undefined supports the **MCP (Model Context Protocol)**, so you can connect external MCP servers to extend the AI's capabilities without limit (e.g. file system, database, or Git access).
1. Copy the sample config: `cp config/mcp.json.example config/mcp.json`
2. Edit `config/mcp.json` and add the MCP servers you need.
3. In `config.toml`, set `[mcp].config_path = "config/mcp.json"`
**Example: file system access**
```json
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/files"]
}
}
}
```
For more resources, visit the [official MCP documentation](https://modelcontextprotocol.io/) or discover more servers on [mcp.so](https://mcp.so).
#### Agent-private MCP (optional)
Beyond the global MCP config, each agent also supports its own MCP config file. If present, it is **loaded temporarily** when that agent is invoked and released when the call finishes; its tools are visible only to that agent (tool names are the raw MCP names, with no extra prefix). This mechanism does not require setting `MCP_CONFIG_PATH`.
- Path: `src/Undefined/skills/agents/<agent_name>/mcp.json`
- Example: `web_agent` ships with the Playwright MCP preconfigured (for web-browsing/screenshot capabilities)
```json
{
"mcpServers": {
"playwright": {
"command": "npx",
"args": ["@playwright/mcp@latest"]
}
}
}
```
### Anthropic Skills Configuration
SKILL.md files following the [agentskills.io](https://agentskills.io) open standard are supported, injecting domain knowledge into the AI.
**Getting skills:**
- Official repository: [github.com/anthropics/skills](https://github.com/anthropics/skills)
- Community collection: [agentskills.io](https://agentskills.io)
**Placement:**
- Global: `src/Undefined/skills/anthropic_skills/<skill-name>/SKILL.md`
- Agent-private: `src/Undefined/skills/agents/<agent-name>/anthropic_skills/<skill-name>/SKILL.md`
## Usage
### Getting Started
1. Start a OneBot protocol endpoint (e.g. NapCat) and log in to QQ.
2. Fill in `config.toml` and start Undefined.
3. Once connected, the bot responds in group or private chats.
### Agent Capability Showcase
The bot infers user intent from natural language and automatically dispatches the appropriate Agent:
* **Web search**: "Search for the latest DeepSeek news"
* **Bilibili video**: send a Bilibili link/BV ID to auto-download and send the video, or instruct the AI: "download this Bilibili video BV1xx411c7mD"
* **Code delivery**: "Write an HTTP server in Python that listens on port 8080 and returns Hello World, package it, and send it to this group"
* **Scheduled tasks**: "Remind me to read the news at 8 a.m. every day"
### Admin Commands
Use the following commands in group or private chat (admin privileges required):
```bash
/help              # Show the help menu
/lsadmin           # List admins
/addadmin <QQ>     # Add an admin (super admin only)
/rmadmin <QQ>      # Remove an admin
/bugfix <QQ>       # Generate a bug-fix report for the given user
/stats [range]     # Token usage statistics + AI analysis (e.g. 7d/30d/1w/1m)
```
`/stats` notes:
- Defaults to the last 7 days; the requested range is automatically clamped to 1-365 days
- Generates a chart with an attached AI analysis; if the analysis times out, the chart and summary are sent first, followed by a timeout notice
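The clamping described above amounts to a one-liner (a hypothetical helper, not the project's code):

```python
def clamp_stats_range(days: int, lo: int = 1, hi: int = 365) -> int:
    """Clamp a requested /stats window to the allowed range of days."""
    return max(lo, min(hi, days))
```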
## Extension and Development
Undefined welcomes developers to build together!
### Directory Structure
```
src/Undefined/
├── ai/                    # AI runtime (client, prompt, tooling, summary, multimodal)
├── bilibili/              # Bilibili video parsing, download, and sending
├── skills/                # Core skills plugin directory
│   ├── tools/             # Basic tools (atomic functional units)
│   ├── toolsets/          # Toolsets (grouped tools)
│   ├── agents/            # Agents (sub-AIs)
│   └── anthropic_skills/  # Anthropic Skills (SKILL.md format)
├── services/              # Core services (Queue, Command, Security)
├── utils/                 # Shared utilities
├── handlers.py            # Message handling layer
└── onebot.py              # OneBot WebSocket client
```
### Development Guide
See [src/Undefined/skills/README.md](src/Undefined/skills/README.md) to learn how to write new tools and Agents.
**callable.json sharing mechanism**: see [docs/callable.md](docs/callable.md) for how Agents call each other, and how tools under `skills/tools` or `skills/toolsets` can be exposed to Agents via an allowlist.
### Development Self-checks
```bash
uv run ruff format .
uv run ruff check .
uv run mypy .
```
## Documentation and Further Reading
- Architecture: [`ARCHITECTURE.md`](ARCHITECTURE.md)
- Skills overview: [`src/Undefined/skills/README.md`](src/Undefined/skills/README.md)
- Agents development: [`src/Undefined/skills/agents/README.md`](src/Undefined/skills/agents/README.md)
- Tools development: [`src/Undefined/skills/tools/README.md`](src/Undefined/skills/tools/README.md)
- Toolsets development: [`src/Undefined/skills/toolsets/README.md`](src/Undefined/skills/toolsets/README.md)
## Risk Notice and Disclaimer
1. **Account risk-control and ban risk (including QQ accounts)**
   This project relies on third-party protocol endpoints (e.g. NapCat/Lagrange.Core) to access platform services. Any losses caused by account risk control, feature restrictions, temporary freezes, or permanent bans (including business interruption, data loss, and loss of account assets) are borne solely by the deploying party.
2. **Sensitive-information handling risk**
   Do not use this project to actively collect, store, export, or distribute passwords, tokens, identity documents, bank card details, private chat contents, or other sensitive information. Users bear sole responsibility for information leaks, compliance penalties, and consequential losses caused by misconfiguration, insufficient access control, log leakage, defects in secondary development, or unlawful data handling.
3. **Compliance obligations**
   Users must ensure that their deployment and operations comply with local laws and regulations, platform agreements, and group rules (including but not limited to data protection, privacy protection, network security, and restrictions on automation). The project maintainers assume no joint liability for users' specific actions or their consequences.
## Acknowledgements and Links
### NagaAgent
This project integrates the **NagaAgent** submodule. Undefined was born in the NagaAgent community; many thanks to the author and the community for their support.
> [NagaAgent - A simple yet powerful agent framework.](https://github.com/Xxiii8322766509/NagaAgent)
## License
This project is released under the [MIT License](LICENSE).
<div align="center">
<strong>⭐ If this project helps you, please consider giving it a Star</strong>
</div>
| text/markdown | null | Null <pylindex@qq.com> | null | null | null | null | [] | [] | null | null | <3.14,>=3.11 | [] | [] | [] | [
"aiofiles>=25.1.0",
"aiohttp>=3.13.2",
"apscheduler>=3.10.0",
"chardet>=5.2.0",
"crawl4ai>=0.3.0",
"croniter>=2.0.0",
"fastmcp>=2.14.4",
"httpx>=0.27.0",
"imgkit",
"langchain-community>=0.3.0",
"linkify-it-py>=2.0.3",
"lunar-python>=1.4.8",
"lxml>=5.4.0",
"markdown-it-py[plugins]>=4.0.0",
... | [] | [] | [] | [
"Repository, https://github.com/69gg/Undefined",
"Issues, https://github.com/69gg/Undefined/issues"
] | uv/0.10.4 {"installer":{"name":"uv","version":"0.10.4","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true} | 2026-02-21T05:15:52.578222 | undefined_bot-2.14.0.tar.gz | 1,692,847 | 82/92/ebba695ef4e9ae95a059eb82e0f04f64355bf8c308ddcee5a0d114f6d2bc/undefined_bot-2.14.0.tar.gz | source | sdist | null | false | db8f2c539b054d0a9843526b2cf06a18 | 1396bd1afdfd1f7518247bd43bbd9cac6f8a3502720a328ff085637fca05827d | 8292ebba695ef4e9ae95a059eb82e0f04f64355bf8c308ddcee5a0d114f6d2bc | null | [
"LICENSE"
] | 0 |
2.4 | dokku-api | 1.2.9 | A RESTful API for managing applications and resources on a Dokku platform. | # Dokku API
This is a RESTful API for managing applications and resources on Dokku, built with [FastAPI](https://fastapi.tiangolo.com/).
[](https://github.com/JeanExtreme002/Dokku-API/actions/workflows/ci.yml)
[](https://pypi.org/project/dokku-api/)
[](https://pypi.org/project/Dokku-API/)
[](https://pypi.org/project/dokku-api/)
[](https://pypi.org/project/dokku-api/)
### Installing Dokku API from PyPI:
```
$ pip install dokku-api
$ dokku-api help
```
## Getting Started (quick run)
The entire project has been built to run entirely on [Dokku](https://dokku.com/) or [Docker](https://www.docker.com/).
Create a `.env` from `.env.sample`, configure the variables, and execute one of the commands below to run the application:
```
# For installing and running the API as a Dokku application.
$ make dokku-install
# For installing and running the API on Docker.
$ make docker-run
```
Now, open the API on your browser at [http://dokku-api.yourdomain](http://dokku-api.yourdomain) — if you did not change the default settings.
Access [/docs](http://dokku-api.yourdomain/docs) for more information about the API.
## Getting Started (development)
Install the dependencies for the project:
```
$ pip install poetry
$ make install
```
Now, you can run the server with:
```
$ make run
```
Run `make help` to learn about more commands.
## Running Tests
The project includes tests that check everything is working properly. To run them, execute the commands below:
```
$ make test
$ make system-test
```
## Coding Style
Run the commands below to properly format the project's code:
```
$ make lint
$ make lint-fix
```
| text/markdown | JeanExtreme002 | jeanextreme002@gmail.com | null | null | MIT | dokku, api, paas, application, deployment, devops, automation | [
"License :: OSI Approved :: MIT License",
"Intended Audience :: Developers",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Scientific/Engineering",
"Topic :: Security",
... | [] | null | null | >=3.11 | [] | [] | [] | [
"PyNaCl==1.5.0",
"PyYAML==6.0",
"aiofiles>=24.1.0",
"aiomysql>=0.2.0",
"anyio==3.5.0",
"apscheduler<4.0.0,>=3.11.2",
"asgiref==3.5.0",
"asyncmy>=0.2.10",
"asyncssh==2.13.2",
"bcrypt==3.2.0",
"cffi==1.15.0",
"click==8.0.4",
"colorama==0.4.4",
"cryptography==36.0.1",
"fastapi==0.75.0",
"... | [] | [] | [] | [
"Documentation, https://github.com/JeanExtreme002/dokku-api",
"Homepage, https://github.com/JeanExtreme002/dokku-api",
"Repository, https://github.com/JeanExtreme002/dokku-api"
] | twine/6.2.0 CPython/3.11.0 | 2026-02-21T05:15:37.712994 | dokku_api-1.2.9.tar.gz | 96,201 | 1c/b7/c522e3f278febfcdc45488cb2f5870684d6830cfbdf3bd9f68495420cd9e/dokku_api-1.2.9.tar.gz | source | sdist | null | false | e9ade8583582a3bc9e58033a6cf84ffe | dc4e62ba381e1106324b01d70399f6fb76ae9fa7127d0203a3ac1d3f1059b3e1 | 1cb7c522e3f278febfcdc45488cb2f5870684d6830cfbdf3bd9f68495420cd9e | null | [
"LICENSE"
] | 227 |
2.4 | steer-opencell-design | 1.0.22 | STEER OpenCell Design - A Python package for designing and modeling battery cells. | # steer-opencell-design
A Python package for designing and modeling lithium-ion and sodium-ion battery cells. Part of the [STEER](https://github.com/stanford-developers) platform, `steer-opencell-design` provides a hierarchical, composable API for building virtual battery cells from raw materials up to complete cell assemblies, with built-in cost, mass, and electrochemical performance calculations.
## Features
- **Hierarchical cell modeling** — compose cells from materials → formulations → electrodes → assemblies → complete cells
- **Multiple cell formats** — cylindrical, prismatic, pouch, and flex-frame cell architectures
- **Multiple assembly types** — wound jelly rolls (round and flat), z-fold stacks, and punched stacks
- **Electrochemical curves** — half-cell voltage–capacity curves are combined into full-cell curves with N/P ratio control
- **Cost and mass breakdowns** — automatic roll-up of cost and mass from component level to cell level
- **Interactive visualization** — Plotly-based cross-sections, top-down views, capacity plots, and sunburst breakdowns
- **Serialization** — serialize and deserialize full cell configurations for storage and sharing
- **Database integration** — load reference materials and cell designs from the built-in database
## Installation
```bash
pip install steer-opencell-design
```
Requires Python >= 3.10. Dependencies (`steer-core`, `steer-materials`, `steer-opencell-data`) are installed automatically.
## Quickstart
The following example builds a complete cylindrical cell from scratch. The workflow follows the natural hierarchy: **Materials → Formulations → Electrodes → Layup → Assembly → Cell**.
```python
import steer_opencell_design as ocd
# ── 1. Materials ──────────────────────────────────────────────────
# Load active materials from the built-in database
cathode_active = ocd.CathodeMaterial.from_database("LFP")
cathode_active.specific_cost = 6 # $/kg
cathode_active.density = 3.6 # g/cm³
anode_active = ocd.AnodeMaterial.from_database("Synthetic Graphite")
anode_active.specific_cost = 4
anode_active.density = 2.2
# Create auxiliary materials
conductive_additive = ocd.ConductiveAdditive(
name="Super P", specific_cost=15, density=2.0, color="#000000"
)
binder = ocd.Binder(name="CMC", specific_cost=10, density=1.5, color="#FFFFFF")
# ── 2. Formulations ──────────────────────────────────────────────
cathode_formulation = ocd.CathodeFormulation(
active_materials={cathode_active: 95}, # weight %
binders={binder: 2},
conductive_additives={conductive_additive: 3},
)
anode_formulation = ocd.AnodeFormulation(
active_materials={anode_active: 90},
binders={binder: 5},
conductive_additives={conductive_additive: 5},
)
# ── 3. Current Collectors ────────────────────────────────────────
cc_material = ocd.CurrentCollectorMaterial(
name="Aluminum", specific_cost=5, density=2.7, color="#AAAAAA"
)
cathode_cc = ocd.NotchedCurrentCollector(
material=cc_material,
length=4500, # mm
width=300, # mm
thickness=8, # μm
tab_width=60, # mm
tab_spacing=200, # mm
tab_height=18, # mm
insulation_width=6, # mm
coated_tab_height=2, # mm
)
anode_cc = ocd.NotchedCurrentCollector(
material=cc_material,
length=4500, width=306, thickness=8,
tab_width=60, tab_spacing=100, tab_height=18,
insulation_width=6, coated_tab_height=2,
)
# ── 4. Electrodes ────────────────────────────────────────────────
insulation = ocd.InsulationMaterial.from_database("Aluminium Oxide, 99.5%")
cathode = ocd.Cathode(
formulation=cathode_formulation,
mass_loading=12, # mg/cm²
current_collector=cathode_cc,
calender_density=2.60, # g/cm³
insulation_material=insulation,
insulation_thickness=10, # μm
)
anode = ocd.Anode(
formulation=anode_formulation,
mass_loading=7.2,
current_collector=anode_cc,
calender_density=1.1,
insulation_material=insulation,
insulation_thickness=10,
)
# ── 5. Separator & Layup ─────────────────────────────────────────
separator_material = ocd.SeparatorMaterial(
name="Polyethylene", specific_cost=2, density=0.94,
color="#FDFDB7", porosity=45, # %
)
top_separator = ocd.Separator(material=separator_material, thickness=25, width=310, length=5000)
bottom_separator = ocd.Separator(material=separator_material, thickness=25, width=310, length=7000)
layup = ocd.Laminate(
anode=anode, cathode=cathode,
top_separator=top_separator, bottom_separator=bottom_separator,
)
# ── 6. Electrode Assembly ────────────────────────────────────────
mandrel = ocd.RoundMandrel(diameter=5, length=350)
tape_material = ocd.TapeMaterial.from_database("Kapton")
tape_material.density = 1.42
tape_material.specific_cost = 70
tape = ocd.Tape(material=tape_material, thickness=30)
jellyroll = ocd.WoundJellyRoll(
laminate=layup, mandrel=mandrel,
tape=tape, additional_tape_wraps=5,
)
# ── 7. Encapsulation ─────────────────────────────────────────────
aluminum = ocd.PrismaticContainerMaterial.from_database("Aluminum")
copper = ocd.PrismaticContainerMaterial.from_database("Copper")
encapsulation = ocd.CylindricalEncapsulation(
cathode_terminal_connector=ocd.CylindricalTerminalConnector(material=aluminum, thickness=2, fill_factor=0.8),
anode_terminal_connector=ocd.CylindricalTerminalConnector(material=copper, thickness=3, fill_factor=0.7),
lid_assembly=ocd.CylindricalLidAssembly(material=aluminum, thickness=4.0, fill_factor=0.9),
canister=ocd.CylindricalCanister(material=aluminum, outer_radius=21.4, height=330, wall_thickness=0.5),
)
# ── 8. Electrolyte & Cell ────────────────────────────────────────
electrolyte = ocd.Electrolyte(
name="1M LiPF6 in EC:DMC (1:1)",
density=1.2, specific_cost=15.0, color="#00FF00",
)
cell = ocd.CylindricalCell(
reference_electrode_assembly=jellyroll,
encapsulation=encapsulation,
electrolyte=electrolyte,
electrolyte_overfill=20, # %
)
# ── 9. Inspect Results ───────────────────────────────────────────
print(f"Energy: {cell.energy} Wh")
print(f"Mass: {cell.mass} g")
print(f"Specific energy: {cell.specific_energy} Wh/kg")
print(f"Volumetric energy: {cell.volumetric_energy} Wh/L")
print(f"Cost per energy: {cell.cost_per_energy} $/kWh")
# Visualize
cell.get_cross_section().show()
cell.get_capacity_plot().show()
cell.plot_mass_breakdown().show()
cell.plot_cost_breakdown().show()
```
## Package Overview
The package is organized into four layers that mirror the physical hierarchy of a battery cell:
```
Materials → Components → Constructions → Cells
```
### Materials (`steer_opencell_design.Materials`)
Raw materials and electrode formulations.
| Class | Description |
|---|---|
| `CathodeMaterial` / `AnodeMaterial` | Active materials with half-cell voltage–capacity curves |
| `Binder` | Electrode binder materials (e.g., PVDF, CMC) |
| `ConductiveAdditive` | Conductive additives (e.g., carbon black, Super P) |
| `CathodeFormulation` / `AnodeFormulation` | Blended electrode formulations with weight fractions |
| `Electrolyte` | Liquid electrolyte materials |
| `SeparatorMaterial` | Separator base material with porosity |
| `CurrentCollectorMaterial` | Metal foil material for current collectors |
| `TapeMaterial` | Adhesive tape material for winding termination |
| `InsulationMaterial` | Ceramic insulation coatings (e.g., Al₂O₃) |
| `PrismaticContainerMaterial` | Container housing materials (aluminum, steel) |
| `LaminateMaterial` | Laminate pouch film materials |
| `FlexFrameMaterial` | Flex-frame housing materials (e.g., PEEK) |
Most materials can be loaded from the built-in database:
```python
material = ocd.CathodeMaterial.from_database("NMC811")
binder = ocd.Binder.from_database("PVDF")
```
### Components (`steer_opencell_design.Components`)
Physical parts that make up a cell.
**Electrodes:**
| Class | Description |
|---|---|
| `Cathode` / `Anode` | Complete electrodes with formulation, current collector, and coating parameters |
**Current Collectors:**
| Class | Description |
|---|---|
| `NotchedCurrentCollector` | Notched foil for tabless wound cells |
| `TabWeldedCurrentCollector` | Foil with welded tab strips at specified positions |
| `TablessCurrentCollector` | Continuous foil with edge-based connections |
| `PunchedCurrentCollector` | Punched foil with integral tabs for stacked cells |
**Separators:**
| Class | Description |
|---|---|
| `Separator` | Porous separator membrane |
**Containers:**
| Class | Description |
|---|---|
| `CylindricalCanister`, `CylindricalLidAssembly`, `CylindricalTerminalConnector`, `CylindricalEncapsulation` | Cylindrical can components |
| `PrismaticCanister`, `PrismaticLidAssembly`, `PrismaticTerminalConnector`, `PrismaticEncapsulation` | Prismatic housing components |
| `PouchEncapsulation`, `LaminateSheet`, `PouchTerminal` | Pouch film components |
| `FlexFrame`, `FlexFrameEncapsulation` | Flex-frame housing components |
### Constructions (`steer_opencell_design.Constructions`)
Higher-level assemblies that combine components.
**Layups** — define how electrode layers are arranged:
| Class | Description |
|---|---|
| `Laminate` | Two-separator layup for wound cells (top + bottom separator sandwiching cathode and anode) |
| `MonoLayer` | Single-separator layup for stacked cells |
| `ZFoldMonoLayer` | Z-fold separator variant of MonoLayer |
**Electrode Assemblies** — define how layups are assembled:
| Class | Description |
|---|---|
| `WoundJellyRoll` | Cylindrical (round) wound jelly roll |
| `FlatWoundJellyRoll` | Flat (racetrack) wound jelly roll for prismatic cells |
| `ZFoldStack` | Z-fold stacked electrode assembly |
| `PunchedStack` | Punched/stacked electrode assembly |
**Cells** — complete battery cells:
| Class | Description |
|---|---|
| `CylindricalCell` | Cylindrical cell (e.g., 18650, 21700, 4680) |
| `PrismaticCell` | Prismatic hard-case cell |
| `PouchCell` | Pouch (soft-pack) cell |
| `FlexFrameCell` | Flex-frame cell for solid-state designs |
### Utilities
| Class/Function | Description |
|---|---|
| `NPRatioControlMode` | Enum controlling how N/P ratio adjustments propagate |
| `OverhangControlMode` | Enum controlling electrode overhang behavior |
| `RoundMandrel` / `FlatMandrel` | Winding mandrel geometry for jelly roll assembly |
| `Tape` | Termination tape for wound assemblies |
## Units Convention
| Quantity | Unit |
|---|---|
| Length, width, height | mm |
| Thickness (coatings, foils, separators, tapes) | μm |
| Mass loading | mg/cm² |
| Density | g/cm³ |
| Specific cost | $/kg |
| Porosity, weight fractions | % |
| Energy | Wh |
| Mass (cell-level) | g |
| Cost (cell-level) | $ |
| Specific energy | Wh/kg |
| Volumetric energy density | Wh/L |
| Cost per energy | $/kWh |
## Propagating Changes Through the Hierarchy
`steer-opencell-design` uses a hierarchical object model where child components are nested inside parent components:
```
Cell
└── ElectrodeAssembly (JellyRoll, Stack)
└── Layup (Laminate, MonoLayer)
├── Cathode
│ ├── Formulation
│ │ └── ActiveMaterials, Binders, etc.
│ └── CurrentCollector
├── Anode
│ ├── Formulation
│ └── CurrentCollector
└── Separators
```
When you modify a property deep in the hierarchy (e.g., changing the cathode's mass loading), parent objects need to recalculate their derived properties. There are two methods to handle this:
### Method 1: `propagate_changes()` (Recommended)
The simplest approach is to modify the property and then call `propagate_changes()` on that object. This bubbles the recalculation up through all parent objects automatically:
```python
# Modify a property low in the hierarchy
cell.reference_electrode_assembly.layup.cathode.mass_loading = 15
# Propagate changes up to the cell level
cell.reference_electrode_assembly.layup.cathode.propagate_changes()
# Now the cell's energy, mass, cost, etc. are all updated
print(cell.energy) # Reflects the new mass loading
```
You can call `propagate_changes()` from any level in the hierarchy:
```python
# Modify current collector thickness
cell.reference_electrode_assembly.layup.cathode.current_collector.thickness = 12
# Propagate from the current collector level - goes through:
# CurrentCollector → Cathode → Layup → JellyRoll → Cell
cell.reference_electrode_assembly.layup.cathode.current_collector.propagate_changes()
```
### Method 2: `update()` (Single Level)
If you only need to recalculate a single object without propagating to parents, use `update()`:
```python
# Recalculate just the cathode's properties
cathode.update()
```
This is useful when making multiple changes before triggering a full recalculation, or when working with standalone components not yet attached to a parent.
### Method 3: Re-assignment (Manual Propagation)
You can also trigger recalculation by re-assigning each component through its parent's setter:
```python
# Modify the active material
cell.reference_electrode_assembly.layup.cathode.formulation.active_material_1 = new_material
# Propagate changes up the hierarchy by re-assigning each level
cell.reference_electrode_assembly.layup.cathode.formulation = (
cell.reference_electrode_assembly.layup.cathode.formulation
)
cell.reference_electrode_assembly.layup.cathode = (
cell.reference_electrode_assembly.layup.cathode
)
cell.reference_electrode_assembly.layup = (
cell.reference_electrode_assembly.layup
)
cell.reference_electrode_assembly = (
cell.reference_electrode_assembly
)
```
This approach gives you explicit control but is more verbose. The `propagate_changes()` method is generally preferred.
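The bubbling behavior of `propagate_changes()` can be modeled with a minimal parent/child sketch (illustrative only, not the library's implementation):

```python
class Component:
    """Toy model of the recalculation pattern used by the cell hierarchy."""

    def __init__(self, parent=None):
        self.parent = parent
        self.recalculated = False

    def update(self):
        # Recalculate this object's derived properties (placeholder).
        self.recalculated = True

    def propagate_changes(self):
        # Recalculate locally, then bubble up until there is no parent.
        self.update()
        if self.parent is not None:
            self.parent.propagate_changes()
```

Calling `propagate_changes()` on the deepest modified node refreshes every ancestor in one pass, which is why it is preferred over manual re-assignment.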
### After Deserialization
When loading a cell from serialized data or a database, parent references may not be established automatically. `propagate_changes()` still works; it simply stops at the first level that has no parent. In practice, parent references are established as you access nested components, so changes propagate correctly after deserialization:
```python
# Load from database
cell = ocd.CylindricalCell.from_database(table_name="cell_references", name="My Cell")
# Modify and propagate - works correctly
cell.reference_electrode_assembly.layup.cathode.mass_loading = 14
cell.reference_electrode_assembly.layup.cathode.propagate_changes()
```
## Serialization
Cells can be serialized and deserialized for storage:
```python
# Save
data = cell.serialize()
# Restore
restored_cell = ocd.CylindricalCell.deserialize(data)
```
## Loading from Database
Reference cells and materials can be loaded from the built-in database:
```python
cell = ocd.CylindricalCell.from_database(
table_name="cell_references",
name="LFP Cylindrical Tabless Cell"
)
```
## Testing
```bash
# Run all tests
pytest
# Run a specific test
pytest -k test_cells
```
## Development
```bash
# Format Python code
black .
isort .
```
## Citation
If you use this software in your research, please cite it using the metadata in `CITATION.cff`.
## License
MIT License. See [LICENCE.txt](LICENCE.txt) for details.
| text/markdown | null | Nicholas Siemons <nsiemons@stanford.edu> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"steer-core==0.1.45",
"steer-materials==0.1.29",
"steer-opencell-data==0.0.21"
] | [] | [] | [] | [
"Homepage, https://github.com/stanford-developers/steer-opencell-design/",
"Repository, https://github.com/stanford-developers/steer-opencell-design/"
] | twine/6.1.0 CPython/3.11.4 | 2026-02-21T05:13:59.210424 | steer_opencell_design-1.0.22.tar.gz | 288,778 | c5/83/91562660197c26b6b22ced3e783d12af8724a44e2fbc362ec4b21d98fbd7/steer_opencell_design-1.0.22.tar.gz | source | sdist | null | false | e98b26cfc7f6e0e43da7e4a046fd1581 | 2cc27f6859d186ea652307746b92cad34d90b80d6a818abbef88b01f04034540 | c58391562660197c26b6b22ced3e783d12af8724a44e2fbc362ec4b21d98fbd7 | null | [
"LICENCE.txt"
] | 233 |
2.4 | nfelib | 2.4.1 | nfelib: electronic invoicing library for Brazil | nfelib - bindings Python para e ler e gerir XML de NF-e, NFS-e nacional, CT-e, MDF-e, BP-e
==========================================================================================
<p align="center">
<a href="https://akretion.com/pt-BR" >
<img src="https://raw.githubusercontent.com/akretion/nfelib/master/ext/nfelib.jpg"/>
</a>
</p>
<p align="center">
<a href="https://codecov.io/gh/akretion/nfelib" >
<img src="https://codecov.io/gh/akretion/nfelib/branch/master/graph/badge.svg?token=IqcCHJzhuw"/>
</a>
<a href="https://pypi.org/project/nfelib/"><img alt="PyPI" src="https://img.shields.io/pypi/v/nfelib"></a>
<a href="https://pepy.tech/project/nfelib"><img alt="Downloads" src="https://pepy.tech/badge/nfelib"></a>
</p>
## Why choose nfelib
* **Simple and reliable.** Other libraries often carry tens of thousands of lines of hand-written code to do what nfelib does automatically with a few lines, generating code with [xsdata](https://xsdata.readthedocs.io/) from the latest XSD packages published by the Fazenda (the Brazilian tax authority). xsdata is an extremely well-written and well-tested data-binding library, and nfelib itself ships tests that read and generate every fiscal document.
* **Complete**: since generating the bindings became trivial, nfelib keeps all the bindings up to date for interacting with every service and event of NF-e, NFS-e nacional, CT-e, MDF-e and BP-e. The tests also detect when a new version of a schema is released.
## Installation
```bash
pip install nfelib
```
## Usage
**NF-e**
```python
>>> # Read an NF-e XML file:
>>> from nfelib.nfe.bindings.v4_0.proc_nfe_v4_00 import NfeProc
>>> nfe_proc = NfeProc.from_path("nfelib/nfe/samples/v4_0/leiauteNFe/NFe35200159594315000157550010000000012062777161.xml")
>>> # (the from_xml(xml) method can also be used)
>>>
>>> nfe_proc.NFe.infNFe.emit.CNPJ
'59594315000157'
>>> nfe_proc.NFe.infNFe.emit
Tnfe.InfNfe.Emit(CNPJ='59594315000157', CPF=None, xNome='Akretion LTDA', xFant='Akretion', enderEmit=TenderEmi(xLgr='Rua Paulo Dias', nro='586', xCpl=None, xBairro=None, cMun='3501152', xMun='Alumínio', UF=<TufEmi.SP: 'SP'>, CEP='18125000', cPais=<TenderEmiCPais.VALUE_1058: '1058'>, xPais=<TenderEmiXPais.BRASIL: 'Brasil'>, fone='2130109965'), IE='755338250133', IEST=None, IM=None, CNAE=None, CRT=<EmitCrt.VALUE_1: '1'>)
>>> nfe_proc.NFe.infNFe.emit.enderEmit.UF.value
'SP'
>>>
>>> # Serialize an NF-e:
>>> nfe_proc.to_xml()
'<?xml version="1.0" encoding="UTF-8"?>\n<nfeProc xmlns="http://www.portalfiscal.inf.br/nfe" versao="4.00">\n <NFe>\n <infNFe versao="4.00" Id="35200159594315000157550010000000012062777161">\n <ide>\n <cUF>35</cUF>\n <cNF>06277716</cNF>\n <natOp>Venda</natOp>\n <mod>55</mod>\n <serie>1</serie>\n <nNF>1</nNF>\n <dhEmi>2020-01-01T12:00:00+01:00</dhEmi>\n <dhSaiEnt>2020-01-01T12:00:00+01:00</dhSaiEnt>\n <tpNF>1</tpNF>\n <idDest>1</idDest>\n [...]
>>>
>>> # Build an NFe from scratch:
>>> from nfelib.nfe.bindings.v4_0.nfe_v4_00 import Nfe
>>> nfe=Nfe(infNFe=Nfe.InfNfe(emit=Nfe.InfNfe.Emit(xNome="Minha Empresa", CNPJ='59594315000157')))
>>> nfe
Nfe(infNFe=Tnfe.InfNfe(ide=None, emit=Tnfe.InfNfe.Emit(CNPJ='59594315000157', CPF=None, xNome='Minha Empresa', xFant=None, enderEmit=None, IE=None, IEST=None, IM=None, CNAE=None, CRT=None), avulsa=None, dest=None, retirada=None, entrega=None, autXML=[], det=[], total=None, transp=None, cobr=None, pag=None, infIntermed=None, infAdic=None, exporta=None, compra=None, cana=None, infRespTec=None, infSolicNFF=None, versao=None, Id=None), infNFeSupl=None, signature=None)
>>>
>>> # Validate an invoice XML:
>>> nfe.validate_xml()
["Element '{http://www.portalfiscal.inf.br/nfe}infNFe': The attribute 'versao' is required but missing.", "Element '{http://www.portalfiscal.inf.br/nfe}infNFe': The attribute 'Id' is required but missing." [...]
```
Sign an invoice XML using the [erpbrasil.assinatura](https://github.com/erpbrasil/erpbrasil.assinatura) lib (this works with the other electronic documents as well):
```python
>>> # Sign an invoice XML:
>>> with open(path_to_your_pkcs12_certificate, "rb") as pkcs12_buffer:
...     pkcs12_data = pkcs12_buffer.read()
>>> signed_xml = nfe.sign_xml(xml, pkcs12_data, cert_password, nfe.NFe.infNFe.Id)
```
Print the DANFE using the [BrazilFiscalReport](https://github.com/Engenere/BrazilFiscalReport) lib or the [erpbrasil.edoc.pdf](https://github.com/erpbrasil/erpbrasil.edoc.pdf) lib (BrazilFiscalReport should eventually print the PDF of the other electronic documents too; erpbrasil.edoc.pdf is a more 'legacy' lib):
```python
>>> # Print an invoice PDF using BrazilFiscalReport:
>>> pdf_bytes = nfe.to_pdf()
>>> # Print an invoice PDF using erpbrasil.edoc.pdf:
>>> pdf_bytes = nfe.to_pdf(engine="erpbrasil.edoc.pdf")
>>> # Or to sign and print in one call:
>>> pdf_bytes = nfe.to_pdf(
...     pkcs12_data=cert_data,
...     pkcs12_password=cert_password,
...     doc_id=nfe.NFe.infNFe.Id,
... )
```
**NFS-e (national standard)**
```python
>>> # Read an NFS-e:
>>> from nfelib.nfse.bindings.v1_0.nfse_v1_00 import Nfse
>>> nfse = Nfse.from_path("alguma_nfse.xml")
>>>
>>> # Serialize an NFS-e:
>>> nfse.to_xml()
>>> # Read a DPS:
>>> from nfelib.nfse.bindings.v1_0.dps_v1_00 import Dps
>>> dps = Dps.from_path("nfelib/nfse/samples/v1_0/GerarNFSeEnvio-env-loterps.xml")
>>>
>>> # Serialize a DPS:
>>> dps.to_xml()
```
**MDF-e**
```python
>>> # Read an MDF-e:
>>> from nfelib.mdfe.bindings.v3_0.mdfe_v3_00 import Mdfe
>>> mdfe = Mdfe.from_path("nfelib/mdfe/samples/v3_0/ComPagtoPIX_41210780568835000181580010402005751006005791-procMDFe.xml")
>>>
>>> # Serialize an MDF-e:
>>> mdfe.to_xml()
```
**CT-e**
```python
>>> # Read a CT-e:
>>> from nfelib.cte.bindings.v4_0.cte_v4_00 import Cte
>>> cte = Cte.from_path("nfelib/cte/samples/v4_0/43120178408960000182570010000000041000000047-cte.xml")
>>>
>>> # Serialize a CT-e:
>>> cte.to_xml()
```
**BP-e**
```python
>>> # Read a BP-e:
>>> from nfelib.bpe.bindings.v1_0.bpe_v1_00 import Bpe
>>> bpe = Bpe.from_path("algum_bpe.xml")
>>>
>>> # Serialize a BP-e:
>>> bpe.to_xml()
```
## Development / tests
To run the tests:
```bash
pytest
```
To update the bindings:
1. download the new schema zip and update the `nfelib/<nfe|nfse|cte|mdfe|bpe>/schemas/<versao>/` folder
2. generate the bindings from an XSD schema package, for example for the NF-e:
```bash
xsdata generate nfelib/nfe/schemas/v4_0 --package nfelib.nfe.bindings.v4_0
```
To generate all the bindings with xsdata:
```bash
./script.sh
```
## Schema versions and folders
nfelib uses only two digits to identify a version. This was decided after observing that the Fazenda never uses the third version digit, and that a change in the second digit already marks a major change. Consequently, any schema change that alters neither the first nor the second digit of the schema version goes into the same folder and overwrites the previous version, on the assumption that the newer schema can be used in place of the older one (for example, an NFe 4.00 from schema package nº 9j (NT 2022.003 v.1.00b) can be read with the bindings generated from package nº 9k (NT 2023.001 v.1.20)).
Conversely, when a major change affects the first two digits, as between NFe 3.0 and NFe 3.1, or NFe 3.1 and NFe 4.0, multiple versions can be supported at the same time using different folders. This would make it possible, for example, to issue a future NFe 5.0 while still importing an NFe 4.0.
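Under this convention, a document type plus a two-digit schema version map mechanically to a binding package. A minimal sketch of that mapping (the helper name is hypothetical, not part of nfelib's API; the module paths follow the examples above):

```python
# Hypothetical helper illustrating the two-digit version-to-folder
# convention described above (not part of nfelib's API).
def binding_package(doc: str, version: str) -> str:
    major, minor = version.split(".")[:2]
    return f"nfelib.{doc}.bindings.v{major}_{minor}"

print(binding_package("nfe", "4.0"))   # nfelib.nfe.bindings.v4_0
print(binding_package("nfse", "1.0"))  # nfelib.nfse.bindings.v1_0
```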
| text/markdown | null | Raphaël Valyi <raphael.valyi@akretion.com.br> | null | null | MIT | e-invoicing, ERP, Odoo, NFe, CTe, MDFe, BPe, NFSe | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.8",
"Programming Language :: P... | [] | null | null | >=3.8 | [] | [] | [] | [
"lxml",
"xsdata",
"erpbrasil.assinatura; extra == \"sign\"",
"brazilfiscalreport; extra == \"pdf\"",
"xsdata[soap]; extra == \"soap\"",
"erpbrasil.assinatura; extra == \"soap\"",
"brazil-fiscal-client; extra == \"soap\"",
"pre-commit; extra == \"test\"",
"pytest; extra == \"test\"",
"pytest-benchm... | [] | [] | [] | [
"Homepage, https://github.com/akretion/nfelib",
"Source, https://github.com/akretion/nfelib",
"Documentation, https://nfelib.readthedocs.io/",
"Changelog, https://nfelib.readthedocs.io/en/latest/changelog/"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:13:47.574762 | nfelib-2.4.1.tar.gz | 603,845 | 94/47/8f637872b5856ee89cb8b40e2118037adf7d3baf710d830e85e35dfc3ba5/nfelib-2.4.1.tar.gz | source | sdist | null | false | c78ff01cb2a158550d1e01dce1ae9753 | abdd90c1b795ebf3c774edc4d16726ad3c9476308a9247f01b451fb6ccdf3af3 | 94478f637872b5856ee89cb8b40e2118037adf7d3baf710d830e85e35dfc3ba5 | null | [
"MIT-LICENSE"
] | 695 |
2.4 | agentnexus-tools | 0.1.0 | Automatically convert any API into AI-ready tools in 30 seconds | # Agent Nexus
**Automatically convert any API into AI-ready tools in 30 seconds**
Agent Nexus eliminates the need for manual API integration code. Point it at any API URL and get production-ready Python tools that AI agents can use immediately.
## What It Does
Developers spend hours writing integration code for every API their AI agents need to use. Agent Nexus automates this entirely.
Give it an API URL → Get working Python code + searchable catalog entry in under 30 seconds.
## How It Works
Agent Nexus uses 4 specialized agents working together:
1. **API Introspector** - Discovers API endpoints automatically (OpenAPI spec or intelligent probing)
2. **Tool Generator** - Creates clean Python integration code with authentication
3. **Catalog Search** - Indexes tools with AI embeddings for semantic search
4. **Tool Orchestrator** - Coordinates multi-tool workflows
## Tech Stack
- Python 3.11+
- Elasticsearch 8.15+ (storage, vector search, ES|QL)
- sentence-transformers (AI embeddings)
- Click (CLI)
- Docker & Docker Compose
## Installation
```bash
# Install from PyPI
pip install agentnexus-tools
# Or install from source
git clone https://github.com/lcgani/agent-nexus
cd agent-nexus
pip install -e .
```
## Quick Start
```bash
# 1. Start Elasticsearch (optional - use --skip-index for faster generation)
docker-compose up -d
# 2. Generate your first tool
agent-nexus generate https://api.github.com
# 3. Use the generated tool
python your_script.py
```
## Usage Examples
### Generate Tool from Any API
```bash
# GitHub
agent-nexus generate https://api.github.com
# Stripe
agent-nexus generate https://api.stripe.com
# Fast mode (skip Elasticsearch indexing)
agent-nexus generate https://api.github.com --skip-index
```
### Search with Natural Language
```bash
# Find payment APIs
python -m src.cli search "payment processing credit cards"
# Find weather APIs
python -m src.cli search "weather forecast temperature"
# Find code hosting APIs
python -m src.cli search "git repositories version control"
```
### Use Generated Tools
```python
# Import auto-generated tool
exec(open('generated_tools/api.github.com.py').read())
github = ApiGithubCom()
# Make API calls
import requests
response = requests.get(
f"{github.base_url}/users/octocat",
headers=github._headers()
)
print(response.json())
```
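If you prefer not to use `exec()`, the generated file can also be loaded as a regular module. A sketch using the standard library's `importlib` (the file body below is a stand-in for whatever Agent Nexus actually emits; the class name follows the README's example):

```python
import importlib.util
import os
import tempfile

# Stand-in for a generated tool file (real files come from `generate`).
code = "class ApiGithubCom:\n    base_url = 'https://api.github.com'\n"
path = os.path.join(tempfile.mkdtemp(), "api.github.com.py")
with open(path, "w") as f:
    f.write(code)

# Load the file as a module instead of exec()-ing its source.
spec = importlib.util.spec_from_file_location("github_tool", path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

github = mod.ApiGithubCom()
print(github.base_url)  # https://api.github.com
```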
## Architecture
```
API URL Input
↓
Agent 1: Introspector (discovers endpoints)
↓
Elasticsearch (stores discovery data)
↓
Agent 2: Generator (creates Python code)
↓
Elasticsearch (stores tool + embeddings)
↓
Agent 3: Search (semantic search)
↓
Agent 4: Orchestrator (multi-tool workflows)
```
## Project Structure
```
agent-nexus/
├── src/
│ ├── agents/
│ │ ├── introspector.py # API discovery
│ │ ├── generator.py # Code generation
│ │ ├── search.py # Vector search
│ │ └── orchestrator.py # Workflow coordination
│ ├── elasticsearch/
│ │ ├── client.py # ES connection
│ │ └── schemas.py # Index mappings
│ ├── cli.py # CLI interface
│ └── config.py # Configuration
├── generated_tools/ # Auto-generated tools
├── docker-compose.yml # Elasticsearch setup
├── requirements.txt
└── README.md
```
## Performance Targets
- Tool generation: <30 seconds per API
- Search relevance: >90% accuracy
- Catalog size: 50+ tools tested
- End-to-end: <1 minute from URL to working tool
## Commands
```bash
# Setup Elasticsearch indexes
python -m src.cli setup
# Generate tool from API
python -m src.cli generate <API_URL>
# Search tool catalog
python -m src.cli search "<query>"
```
## License
Apache-2.0
| text/markdown | Agent Nexus Contributors | null | null | null | Apache-2.0 | ai, api, tools, agents, automation, elasticsearch | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Topic :: Software Development :: Libraries :: Python Mod... | [] | null | null | >=3.11 | [] | [] | [] | [
"click>=8.1.7",
"elasticsearch>=8.15.0",
"requests>=2.31.0",
"python-dotenv>=1.0.0",
"jinja2>=3.1.2",
"sentence-transformers>=2.2.2",
"pyyaml>=6.0.1",
"pytest>=7.4.0; extra == \"dev\"",
"black>=23.7.0; extra == \"dev\"",
"flake8>=6.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/lcgani/agent-nexus",
"Repository, https://github.com/lcgani/agent-nexus",
"Issues, https://github.com/lcgani/agent-nexus/issues"
] | twine/6.2.0 CPython/3.13.3 | 2026-02-21T05:13:26.543708 | agentnexus_tools-0.1.0.tar.gz | 10,828 | a6/31/8b7586ac43bdb5e5f51190abccab2eb1dc7f0a01c4ed0c9b35ca7b2f08bc/agentnexus_tools-0.1.0.tar.gz | source | sdist | null | false | b12e331fad443af4cfb2d69e1b7d3d31 | 27a92f1b7fdd4c9f8303de5d8d8700a3162ad25b45b9fabba1128a64ab0b35a4 | a6318b7586ac43bdb5e5f51190abccab2eb1dc7f0a01c4ed0c9b35ca7b2f08bc | null | [
"LICENSE"
] | 245 |
2.4 | aino | 2.2.0 | AINO Is Neural Operation: A lightweight, educational neural network library. | # AINO (Aino is Neural Operation)

> **"Aino is Neural Operation."**
> A custom-built, highly optimized Deep Learning framework built from scratch using pure Python and NumPy.
[](https://badge.fury.io/py/aino)
[](https://opensource.org/licenses/MIT)
---
## 🌟 Inspiration
This project was born out of curiosity after watching this inspiring video:
[**MIT Introduction to Deep Learning | 6.S191**](https://youtu.be/alfdI7S6wCY?si=MPqH4F2EiP3U67t-)
I didn't want to just `import tensorflow` and call it a day. I wanted to see **directly** how the magic happens under the hood. I wanted to feel the weight of the matrices, understand the flow of the gradients, and build the brain from scratch.
## ⚡ The Tech Stack (Hardware Agnostic)
AINO is built to be educational yet blazingly fast. It uses an **Agnostic Backend**:
* **CPU Mode:** Uses pure `NumPy` with contiguous memory optimization.
* **GPU Mode:** Automatically detects and switches to `CuPy` if an NVIDIA GPU is available, providing massive parallel acceleration without changing a single line of your code.
* **No Black Boxes:** Every Forward Pass and Backpropagation step is manually calculated.
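The CPU/GPU switch described above is commonly implemented as an import-time fallback. A minimal sketch of that pattern (illustrative only, not AINO's actual source; the `to_device` helper is hypothetical):

```python
import numpy

# Illustrative agnostic-backend selector (not AINO's actual code):
# prefer CuPy when an NVIDIA GPU stack is installed, else use NumPy.
try:
    import cupy as xp
    GPU = True
except ImportError:
    xp = numpy
    GPU = False

def to_device(a):
    """Move a host array onto the active backend (hypothetical helper)."""
    return xp.asarray(a)

print("GPU backend:", GPU, "shape:", to_device([1.0, 2.0]).shape)
```

Because all subsequent math goes through `xp`, the same model code runs on either backend without modification.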
## ✨ Features
* **Flexible Architecture:** Define any number of layers and neurons (e.g., `[784, 128, 64, 10]`).
* **Vectorized Operations:** Dropped slow loop-based perceptrons in favor of highly optimized matrix multiplications.
* **Mini-Batch Gradient Descent:** Train on large datasets (like MNIST) efficiently.
* **Activation Functions:** Supports `Sigmoid`, `ReLU`, and `Tanh`.
* **Universal Serialization:** Safely save (`.dit`) and load models across different machines, whether they have a GPU or not.
## 🧠 What I Learned
Building AINO from the ground up gave me insights that high-level libraries often hide:
1. **From OOP to Vectorization:** I initially built the network iterating over individual Perceptrons. I quickly learned that Python loops are slow. Refactoring the `Layer` class to use pure Matrix Calculus (`np.dot`) reduced training time from 32 minutes to just 19 seconds!
2. **The Calculus of Backpropagation:**
I implemented the **Chain Rule** manually, computing derivatives for activations and understanding how error gradients propagate from the output back to the input layers.
3. **Memory Management & Hardware:**
I learned the critical difference between RAM and VRAM, how to use `ascontiguousarray` for CPU caching, and how to safely bridge data between CPU and GPU using `CuPy`.
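The loop-to-matmul refactor in point 1 can be seen in miniature below: both forms compute the same layer pre-activations, but the second replaces the per-neuron Python loop with a single matrix-vector product (shapes are illustrative; the variable names are not AINO's API):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 784))  # one row of weights per neuron
b = np.zeros(64)
x = rng.standard_normal(784)        # one input sample

# Loop version: one dot product per neuron (slow in Python)
out_loop = np.array([W[i] @ x + b[i] for i in range(64)])

# Vectorized version: a single matrix-vector product
out_vec = W @ x + b

assert np.allclose(out_loop, out_vec)
```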
## 💻 Usage Example
```python
from aino.model import NeuralNetwork
# Create a network for MNIST (784 inputs, 2 hidden layers, 10 outputs)
model = NeuralNetwork([784, 128, 64, 10], activation_type='tanh')
# Train using Mini-Batch Gradient Descent (Auto CPU/GPU)
model.fit(X_train, y_train, epochs=100, n=0.01, batch_size=32, verbose=True)
# Make predictions
predictions = model.predict(X_test)
# Save the universally loadable .dit model
model.save('aino_mnist.dit')
```
Built with ❤️ using Python. | text/markdown | null | Arufa <radityaalfarisi6@gmail.com> | null | null | MIT | educational, machine-learning, neural-network, numpy, perceptron | [
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"numpy",
"cupy-cuda12x; extra == \"gpu\""
] | [] | [] | [] | [
"Homepage, https://github.com/arufadesuwa/AINO",
"Repository, https://github.com/arufadesuwa/AINO.git",
"Bug Tracker, https://github.com/arufadesuwa/AINO/issues"
] | uv/0.10.2 {"installer":{"name":"uv","version":"0.10.2","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Arch Linux","version":null,"id":null,"libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T05:13:15.775841 | aino-2.2.0.tar.gz | 171,094 | 37/f4/832ab6d21c18b16b8f1ed6d50a8837830cb6ee248048d0347acda003f217/aino-2.2.0.tar.gz | source | sdist | null | false | 6683755e1900c4f80c487146502cd497 | 3db29b1785503e395e9f735e76c06cb182a89e7fd443d8865b71fd2fdb134c85 | 37f4832ab6d21c18b16b8f1ed6d50a8837830cb6ee248048d0347acda003f217 | null | [] | 263 |
2.4 | steer-opencell-data | 0.0.21 | STEER OpenCell Data - Data feed for STEER OpenCell | # steer-opencell-data
Data feed for STEER OpenCell. Contains the SQLite reference database (`database.db`) and a CLI tool for migrating records to the AWS backend.
## Install
```bash
pip install -e .
```
## CLI Migration Tool
Migrate individual records from the local SQLite database to AWS (DynamoDB + S3). Useful when new materials or cells are created locally (e.g. via Jupyter) and need to be published.
### Prerequisites
```bash
# Install with CLI dependencies
pip install -e ".[cli]"
# Configure AWS credentials
aws configure
# or set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION
```
### Usage
**Interactive** (prompts you through each step):
```bash
python -m steer_opencell_data.cli.migrate_record
```
**Non-interactive** (for scripting):
```bash
python -m steer_opencell_data.cli.migrate_record --table cathode_materials --name LFP --yes
```
**Dry run** (preview without writing to AWS):
```bash
python -m steer_opencell_data.cli.migrate_record --dry-run
```
### Options
| Flag | Description |
| --- | --- |
| `--table TABLE` | Skip table selection prompt |
| `--name NAME` | Skip record selection prompt |
| `--yes, -y` | Skip confirmation prompt |
| `--dry-run` | Preview without writing to AWS |
| `--sqlite-path PATH` | Override database path (default: package's `database.db`) |
| `--verbose, -v` | Verbose logging |
### Environment Variables
| Variable | Default | Description |
| --- | --- | --- |
| `DYNAMODB_TABLE` | `opencell-production` | Target DynamoDB table |
| `S3_BUCKET` | `opencell-production-objects` | Target S3 bucket |
| `AWS_REGION` | `us-east-2` | AWS region |
| text/markdown | null | Nicholas Siemons <nsiemons@stanford.edu> | null | null | MIT | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"steer-core==0.1.45",
"boto3>=1.35.0; extra == \"cli\""
] | [] | [] | [] | [
"Homepage, https://github.com/stanford-developers/steer-opencell-data/",
"Repository, https://github.com/stanford-developers/steer-opencell-data/"
] | twine/6.1.0 CPython/3.11.4 | 2026-02-21T05:12:53.587188 | steer_opencell_data-0.0.21.tar.gz | 76,854,539 | 76/14/aeb9548d303cae0c93a8757c48e00a1e89cb5590b71ee726f36343ed088c/steer_opencell_data-0.0.21.tar.gz | source | sdist | null | false | 70729e821ab53d3ea21822f7f87fd61d | a7936ca7e391ec9b65145e5fbb2a1518badc3f11d1c58c93f688912b0a419095 | 7614aeb9548d303cae0c93a8757c48e00a1e89cb5590b71ee726f36343ed088c | null | [
"LICENCE.txt"
] | 241 |
2.4 | wagtail-reusable-blocks | 0.8.2 | Reusable content blocks with slot-based templating for Wagtail CMS | # wagtail-reusable-blocks
[](https://badge.fury.io/py/wagtail-reusable-blocks)
[](https://pepy.tech/project/wagtail-reusable-blocks)
[](https://djangopackages.org/packages/p/wagtail-reusable-blocks/)
[](https://github.com/kkm-horikawa/wagtail-reusable-blocks/actions/workflows/ci.yml)
[](https://codecov.io/gh/kkm-horikawa/wagtail-reusable-blocks)
[](https://opensource.org/licenses/BSD-3-Clause)
## Philosophy
> "The best user interface for a programmer is usually a programming language."
> — [The Zen of Wagtail](https://docs.wagtail.org/en/stable/getting_started/the_zen_of_wagtail.html)
We wholeheartedly embrace Wagtail's philosophy. Wagtail provides powerful systems like StreamField and StructBlock while keeping the core lightweight—free from features that may be unnecessary for some users. Many developers choose Wagtail over WordPress precisely because of this design philosophy.
However, through building Wagtail sites, we discovered a practical limitation: **Wagtail excels at repository-level implementation, but the admin interface can become rigid** when dealing with shared layouts.
For example, if you create a block for a sidebar or header used across pages, it becomes difficult to customize portions of that block on a per-page basis. As we focused more on UX, we noticed our block definitions multiplying and field counts exploding.
This led to a realization: **Just as code is the best interface for developers, HTML is the most flexible interface for content layouts in the admin.** If editors could write flexible layouts in HTML and inject dynamic content (images, rich text) into specific areas, that would be the ultimate Wagtail editing experience.
That's why we built this library.
Programmers want to keep their repositories clean. They don't want to modify block definitions and risk deployments for minor layout tweaks. With wagtail-reusable-blocks, you can bring the flexibility of programming—Wagtail's core strength—directly into the admin interface.
**Write layouts in HTML. Fill slots with content. Deploy zero code changes.**
## Key Features
- ✅ **Zero-code setup** - Works out of the box, no configuration required
- ✅ **Searchable** - Built-in search in snippet chooser modal
- ✅ **Nested blocks** - Reusable blocks can contain other reusable blocks
- ✅ **Circular reference detection** - Prevents infinite loops automatically
- ✅ **Auto-generated slugs** - Slugs created automatically from names
- ✅ **Admin UI** - Search, filter, copy, and inspect blocks
- ✅ **StreamField support** - RichTextBlock and RawHTMLBlock by default
- ✅ **Customizable** - Extend with your own block types
- ✅ **Slot-based templating** (v0.2.0+) - Reusable layouts with fillable slots
- ✅ **Dynamic slot selection** (v0.2.0+) - Auto-populated dropdown for slot IDs
- ✅ **Revision history** (v0.3.0+) - Track changes and restore previous versions
- ✅ **Draft/Publish workflow** (v0.3.0+) - Save drafts before publishing
- ✅ **Locking** (v0.3.0+) - Prevent concurrent editing conflicts
- ✅ **Approval workflows** (v0.3.0+) - Integration with Wagtail workflows
- ✅ **REST API** (v0.8.0+) - Wagtail API v2 read-only endpoint and DRF full CRUD endpoint
## Use Cases
### Content Reusability (v0.1.0+)
- **Headers/Footers**: Create once, use on all pages
- **Call-to-Action blocks**: Consistent CTAs across the site
- **Promotional banners**: Update in one place, reflect everywhere
- **Disclaimers**: Legal text that needs to be consistent
- **Contact forms**: Reusable form blocks
### Layout Reusability (v0.2.0+)
- **Page templates**: Two-column, three-column, hero sections
- **Card grids**: Product cards, team member cards, feature highlights
- **Article layouts**: Consistent article structure with custom content per page
- **Landing page sections**: Reusable section layouts with page-specific content
## Installation
```bash
pip install wagtail-reusable-blocks
```
Add to your `INSTALLED_APPS`:
```python
# settings.py
INSTALLED_APPS = [
# ...
'wagtail_reusable_blocks',
# ...
]
```
Run migrations:
```bash
python manage.py migrate
```
That's it! **Reusable Blocks** will now appear in your Wagtail admin under **Snippets**.
### Enhanced HTML Editing (Optional)
For a VS Code-like HTML editing experience with syntax highlighting, Emmet support, and fullscreen mode, install with the `editor` extra:
```bash
pip install wagtail-reusable-blocks[editor]
```
Then add `wagtail_html_editor` to your `INSTALLED_APPS`:
```python
INSTALLED_APPS = [
# ...
'wagtail_reusable_blocks',
'wagtail_html_editor', # Add this for enhanced HTML editing
# ...
]
```
This enables [wagtail-html-editor](https://github.com/kkm-horikawa/wagtail-html-editor) for all HTML blocks with syntax highlighting, Emmet abbreviations, and fullscreen mode.
## REST API (v0.8.0+)
wagtail-reusable-blocks provides optional REST API support via two independent integrations.
### Installation
Install with the `api` extra to include both Wagtail API v2 and Django REST Framework:
```bash
pip install wagtail-reusable-blocks[api]
```
Then add the required apps to `INSTALLED_APPS`:
```python
INSTALLED_APPS = [
# ...
'wagtail_reusable_blocks',
'wagtail.api', # for Wagtail API v2 (read-only)
'rest_framework', # for DRF CRUD
# ...
]
```
### Quick Start
#### Option A: Wagtail API v2 (read-only)
Exposes published blocks as a standard Wagtail API v2 endpoint. Suitable for public content delivery (e.g., headless front-ends that only read content).
```python
# urls.py
from wagtail.api.v2.router import WagtailAPIRouter
from wagtail_reusable_blocks.api import ReusableBlockAPIViewSet
api_router = WagtailAPIRouter("wagtailapi")
api_router.register_endpoint("reusable-blocks", ReusableBlockAPIViewSet)
urlpatterns = [
# ...
path("api/v2/", api_router.urls),
]
```
#### Option B: DRF CRUD
Exposes full create/read/update/delete operations via Django REST Framework. Suitable for admin tools or internal services that need to manage blocks programmatically.
```python
# urls.py
from rest_framework.routers import DefaultRouter
from wagtail_reusable_blocks.api import ReusableBlockModelViewSet
router = DefaultRouter()
router.register("reusable-blocks", ReusableBlockModelViewSet)
urlpatterns = [
# ...
path("api/", include(router.urls)),
]
```
### API Endpoints
#### Wagtail API v2 (read-only)
| Method | Path | Description |
|--------|------|-------------|
| `GET` | `/api/v2/reusable-blocks/` | List published blocks |
| `GET` | `/api/v2/reusable-blocks/<id>/` | Retrieve a published block |
Only blocks with `live=True` are returned.
#### DRF CRUD
| Method | Path | Description |
|--------|------|-------------|
| `GET` | `/api/reusable-blocks/` | List blocks |
| `POST` | `/api/reusable-blocks/` | Create a block |
| `GET` | `/api/reusable-blocks/<id>/` | Retrieve a block |
| `PUT` | `/api/reusable-blocks/<id>/` | Replace a block |
| `PATCH` | `/api/reusable-blocks/<id>/` | Partially update a block |
| `DELETE` | `/api/reusable-blocks/<id>/` | Delete a block |
Supports query parameters: `?slug=<slug>`, `?live=true`, `?search=<text>`.
### Request / Response Examples
#### List blocks (DRF)
```
GET /api/reusable-blocks/
Authorization: Token <your-token>
```
```json
[
{
"id": 1,
"name": "Summer Sale Banner",
"slug": "summer-sale-banner",
"content": [
{
"type": "rich_text",
"value": "<p>Summer sale — 20% off everything!</p>",
"id": "abc123"
}
],
"live": true,
"created_at": "2025-06-01T09:00:00Z",
"updated_at": "2025-06-15T14:30:00Z"
}
]
```
#### Create a block (DRF)
```
POST /api/reusable-blocks/
Authorization: Token <your-token>
Content-Type: application/json
{
"name": "Contact Footer",
"content": [
{
"type": "rich_text",
"value": "<p>Contact us at hello@example.com</p>"
}
]
}
```
The `slug` field is auto-generated from `name` when omitted. The `live` field is read-only and managed through the Wagtail admin publish workflow.
### Configuration
API behaviour can be customised via `WAGTAIL_REUSABLE_BLOCKS` in your Django settings:
```python
WAGTAIL_REUSABLE_BLOCKS = {
# v0.8.0 settings - API
'API_PERMISSION_CLASSES': [
'rest_framework.permissions.IsAuthenticated',
],
'API_AUTHENTICATION_CLASSES': None, # None uses DRF DEFAULT_AUTHENTICATION_CLASSES
'API_FILTER_FIELDS': ['slug', 'live'],
'API_SEARCH_FIELDS': ['name', 'slug'],
}
```
| Setting | Default | Description |
|---------|---------|-------------|
| `API_PERMISSION_CLASSES` | `['rest_framework.permissions.IsAuthenticated']` | Permission classes for the DRF CRUD ViewSet |
| `API_AUTHENTICATION_CLASSES` | `None` (uses DRF defaults) | Authentication classes for the DRF CRUD ViewSet |
| `API_FILTER_FIELDS` | `['slug', 'live']` | Fields available for filtering |
| `API_SEARCH_FIELDS` | `['name', 'slug']` | Fields used for search queries |
**Note:** The Wagtail API v2 endpoint (`ReusableBlockAPIViewSet`) uses Wagtail's own authentication mechanism and is not affected by `API_PERMISSION_CLASSES` or `API_AUTHENTICATION_CLASSES`.
## Quick Start
### 1. Create a Reusable Block
1. Go to **Snippets > Reusable Blocks** in Wagtail admin
2. Click **Add Reusable Block**
3. Enter a name (slug is auto-generated)
4. Add content using RichTextBlock or RawHTMLBlock
5. Save
### 2. Use in Your Page Model
```python
from wagtail.models import Page
from wagtail.fields import StreamField
from wagtail.admin.panels import FieldPanel
from wagtail_reusable_blocks.blocks import ReusableBlockChooserBlock
class HomePage(Page):
body = StreamField([
('reusable_block', ReusableBlockChooserBlock()),
# ... other blocks
], blank=True, use_json_field=True)
content_panels = Page.content_panels + [
FieldPanel('body'),
]
```
### 3. Render in Template
```html
{% load wagtailcore_tags %}
{% for block in page.body %}
{% include_block block %}
{% endfor %}
```
That's it! The reusable block content will be rendered automatically.
## Choosing the Right Block
wagtail-reusable-blocks provides two block types for different use cases:
### ReusableBlockChooserBlock - Content Reusability (v0.1.0+)
**Use when:** You want to insert finished content that's shared across pages.
**Example:** A promotional banner that appears on multiple pages.
```python
from wagtail_reusable_blocks.blocks import ReusableBlockChooserBlock
body = StreamField([
('reusable_block', ReusableBlockChooserBlock()),
])
```
**Workflow:**
1. Create a ReusableBlock with complete content (text, images, CTAs)
2. Insert it into multiple pages
3. Update the block once, all pages reflect the change
**Best for:**
- Site-wide announcements
- Consistent call-to-action sections
- Legal disclaimers
- Contact information blocks
### ReusableLayoutBlock - Layout Reusability (v0.2.0+)
**Use when:** You want to reuse a layout template and fill it with page-specific content.
**Example:** A two-column layout where the sidebar is fixed but main content varies by page.
```python
from wagtail_reusable_blocks.blocks import ReusableLayoutBlock
body = StreamField([
('layout', ReusableLayoutBlock()),
])
```
**Workflow:**
1. Create a ReusableBlock with layout HTML containing `data-slot` attributes
2. Select the layout in your page
3. Fill each slot with page-specific content
4. Layout updates affect all pages, but content remains unique
**Best for:**
- Page templates (two-column, three-column, hero sections)
- Card grids with custom content per card
- Article layouts with consistent structure
- Landing page sections
**Note:** Slot detection URLs are registered automatically via Wagtail's `register_admin_urls` hook. No manual URL configuration is required.
## Slot-Based Templating Tutorial
### 1. Create a Layout Template
Go to **Snippets > Reusable Blocks** and create a new block:
**Name:** Two Column Layout
**Content:** Add an HTML block:
```html
<div class="container">
<div class="row">
<aside class="col-md-4">
<nav class="sidebar-nav">
<!-- Fixed navigation -->
<ul>
<li><a href="/">Home</a></li>
<li><a href="/about/">About</a></li>
</ul>
</nav>
<!-- Slot for custom sidebar content -->
<div data-slot="sidebar-extra" data-slot-label="Extra Sidebar Content">
<p>Default sidebar content</p>
</div>
</aside>
<main class="col-md-8">
<!-- Slot for main content -->
<div data-slot="main" data-slot-label="Main Content">
<p>Default main content</p>
</div>
</main>
</div>
</div>
```
**Slot attributes** (custom HTML attributes defined by this library):
- `data-slot="slot-id"` - **Required.** Unique identifier (e.g., "main", "sidebar-extra")
- `data-slot-label="Display Name"` - **Optional.** Human-readable label shown in admin
- Child elements - **Optional.** Default content displayed if slot is not filled
### 2. Use the Layout in a Page
```python
from wagtail.models import Page
from wagtail.fields import StreamField
from wagtail.admin.panels import FieldPanel
from wagtail_reusable_blocks.blocks import ReusableLayoutBlock

class ArticlePage(Page):
    body = StreamField([
        ('layout', ReusableLayoutBlock()),
    ], use_json_field=True)

    content_panels = Page.content_panels + [
        FieldPanel('body'),
    ]
```
### 3. Fill Slots with Content
In the Wagtail admin page editor:
1. Add a "Reusable Layout" block to the body
2. Select "Two Column Layout" from the layout chooser
3. The available slots **automatically** appear as dropdowns:
- Slot: **Main Content** (dropdown)
- Slot: **Extra Sidebar Content** (dropdown)
4. Select "Main Content" and add your content:
- Rich Text: "This is my article about..."
- Image: article-image.jpg
5. Select "Extra Sidebar Content" and add:
- HTML: `<div class="ad">Advertisement</div>`
6. Publish!
### 4. Render in Template
```django
{% load wagtailcore_tags %}
{% for block in page.body %}
{% include_block block %}
{% endfor %}
```
The layout HTML is rendered with your slot content injected at the correct positions.
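Conceptually, rendering swaps each slot element's default children for the content you supplied, leaving unfilled slots untouched. A minimal illustrative sketch of that injection step (not the library's actual implementation, and it assumes slot `<div>`s contain no nested `<div>` elements):

```python
import re

def inject_slots(layout_html: str, slot_content: dict) -> str:
    # Replace the inner HTML of each data-slot element with the supplied
    # content; slots without supplied content keep their default children.
    pattern = re.compile(
        r'(?P<open><div[^>]*data-slot="(?P<id>[^"]+)"[^>]*>).*?</div>',
        re.DOTALL,
    )

    def repl(match):
        slot_id = match.group("id")
        if slot_id in slot_content:
            return match.group("open") + slot_content[slot_id] + "</div>"
        return match.group(0)  # unfilled slot: keep default content

    return pattern.sub(repl, layout_html)

layout = '<main><div data-slot="main"><p>Default main content</p></div></main>'
print(inject_slots(layout, {"main": "<p>My article</p>"}))
# → <main><div data-slot="main"><p>My article</p></div></main>
```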
### 5. Advanced: Nesting Layouts
You can nest layouts within slots:
**Outer Layout:** Page wrapper with header/footer slots
**Inner Layout:** Article layout with sidebar/main slots
```python
ReusableLayoutBlock: "Page Wrapper"
├─ slot: "header"
│ └─ ReusableBlockChooserBlock: "Site Header"
├─ slot: "content"
│ └─ ReusableLayoutBlock: "Two Column Layout" # Nested!
│ ├─ slot: "sidebar-extra"
│ │ └─ HTML: "<div>Ads</div>"
│ └─ slot: "main"
│ └─ RichTextBlock: "Article content..."
└─ slot: "footer"
└─ ReusableBlockChooserBlock: "Site Footer"
```
## Configuration
All settings are optional. Configure via `WAGTAIL_REUSABLE_BLOCKS` in your Django settings:
```python
# settings.py
WAGTAIL_REUSABLE_BLOCKS = {
    # v0.1.0 settings
    'TEMPLATE': 'my_app/custom_template.html',
    'REGISTER_DEFAULT_SNIPPET': True,
    'MAX_NESTING_DEPTH': 5,

    # v0.2.0 settings
    'SLOT_ATTRIBUTE': 'data-slot',
    'SLOT_LABEL_ATTRIBUTE': 'data-slot-label',
    'RENDER_TIMEOUT': 5,
}
```
### Available Settings
| Setting | Default | Description | Version |
|---------|---------|-------------|---------|
| `TEMPLATE` | `'wagtail_reusable_blocks/reusable_block.html'` | Template used to render blocks | v0.1.0+ |
| `REGISTER_DEFAULT_SNIPPET` | `True` | Auto-register default ReusableBlock snippet | v0.1.0+ |
| `MAX_NESTING_DEPTH` | `5` | Maximum depth for nested reusable blocks | v0.1.0+ |
| `SLOT_ATTRIBUTE` | `'data-slot'` | HTML attribute for slot detection | v0.2.0+ |
| `SLOT_LABEL_ATTRIBUTE` | `'data-slot-label'` | Optional label attribute for slots | v0.2.0+ |
| `RENDER_TIMEOUT` | `5` | Maximum render time in seconds | v0.2.0+ |
| `API_PERMISSION_CLASSES` | `['rest_framework.permissions.IsAuthenticated']` | Permission classes for DRF CRUD ViewSet | v0.8.0+ |
| `API_AUTHENTICATION_CLASSES` | `None` | Authentication classes for DRF CRUD ViewSet (`None` uses DRF defaults) | v0.8.0+ |
| `API_FILTER_FIELDS` | `['slug', 'live']` | Fields available for filtering | v0.8.0+ |
| `API_SEARCH_FIELDS` | `['name', 'slug']` | Fields used for search queries | v0.8.0+ |
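For example, to tighten the v0.8.0+ API defaults you can override just those keys (the values below are hypothetical, shown only to illustrate the setting names from the table above):

```python
# settings.py
WAGTAIL_REUSABLE_BLOCKS = {
    'API_PERMISSION_CLASSES': ['rest_framework.permissions.IsAdminUser'],
    'API_FILTER_FIELDS': ['slug'],
    'API_SEARCH_FIELDS': ['name'],
}
```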
## Advanced Usage
### Custom Block Types
To add more block types (images, videos, etc.), create your own model:
```python
from wagtail.blocks import CharBlock, RawHTMLBlock, RichTextBlock
from wagtail.fields import StreamField
from wagtail.images.blocks import ImageChooserBlock
from wagtail.snippets.models import register_snippet
from wagtail_reusable_blocks.models import ReusableBlock

@register_snippet
class CustomReusableBlock(ReusableBlock):
    content = StreamField([
        ('rich_text', RichTextBlock()),
        ('raw_html', RawHTMLBlock()),
        ('image', ImageChooserBlock()),
        ('heading', CharBlock()),
    ], use_json_field=True, blank=True)

    class Meta(ReusableBlock.Meta):
        verbose_name = "Custom Reusable Block"
```
Then disable the default snippet:
```python
# settings.py
WAGTAIL_REUSABLE_BLOCKS = {
    'REGISTER_DEFAULT_SNIPPET': False,
}
```
### Nested Blocks
Reusable blocks can contain other reusable blocks:
1. Create a `ReusableBlock` with your content
2. Create another `ReusableBlock` that references the first one
3. Use the second block in your pages
**Note**: Circular references are automatically detected and prevented. If Block A references Block B, and you try to make Block B reference Block A, you'll get a validation error.
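The check described above amounts to cycle detection over the block-reference graph. A hedged sketch of the idea (illustrative only — the library's actual validation works on `ReusableBlock` instances, not name strings):

```python
def find_cycle(references, start):
    """Depth-first search for a reference cycle starting at `start`.

    `references` maps a block name to the list of blocks it references.
    Returns the cycle path if one exists, else None.
    """
    path = []

    def visit(block):
        if block in path:
            # Found a block already on the current path: report the loop.
            return path[path.index(block):] + [block]
        path.append(block)
        for ref in references.get(block, []):
            cycle = visit(ref)
            if cycle:
                return cycle
        path.pop()
        return None

    return visit(start)

print(find_cycle({"A": ["B"], "B": ["A"]}, "A"))  # → ['A', 'B', 'A']
print(find_cycle({"A": ["B"], "B": ["C"]}, "A"))  # → None
```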
### Custom Templates
Override the default template by creating your own:
```django
{# templates/my_app/custom_block.html #}
<div class="reusable-block">
  {{ block.content }}
</div>
```
Then configure it:
```python
WAGTAIL_REUSABLE_BLOCKS = {
    'TEMPLATE': 'my_app/custom_block.html',
}
```
Or specify per-render:
```python
block.render(template='my_app/custom_block.html')
```
## Troubleshooting
### Circular Reference Error
**Error**: `Circular reference detected: Layout A → Layout B → Layout A`
**Cause**: You've created a circular reference where layouts reference each other in a loop.
**Solution**: Remove one of the references to break the cycle. The error message shows the exact reference chain.
Example fix:
```
Before (circular):
Layout A → slot → Layout B → slot → Layout A ❌
After (linear):
Layout A → slot → Layout B → slot → Layout C ✅
```
### Maximum Nesting Depth Exceeded
**Warning**: `Maximum nesting depth of 5 exceeded`
**Cause**: You've nested layouts deeper than the configured limit (default: 5 levels).
**Solution**:
1. **Reduce nesting depth** - Simplify your layout structure
2. **Increase limit** (not recommended beyond 10):
```python
# settings.py
WAGTAIL_REUSABLE_BLOCKS = {
    'MAX_NESTING_DEPTH': 10,  # Increase with caution
}
```
3. **Refactor** - Consider whether deep nesting is necessary
### Slots Not Appearing (v0.2.0+)
**Issue**: Selected a layout but no slot fields appear in the editor.
**Solutions**:
1. Ensure `wagtail_reusable_blocks` is in `INSTALLED_APPS` (slot detection URLs are registered automatically via the `register_admin_urls` hook — no manual URL include is needed)
2. Check browser console for JavaScript errors
3. Verify the layout has `data-slot` attributes in its HTML
4. Clear browser cache and reload (Cmd+Shift+R or Ctrl+Shift+R)
### Slot Content Not Rendering (v0.2.0+)
**Issue**: Filled a slot but content doesn't appear on the page.
**Solutions**:
1. Check that the `slot_id` matches the `data-slot` attribute exactly (case-sensitive)
2. Verify you're using `{% include_block block %}` in your template
3. Inspect the rendered HTML - the slot element should contain your content
4. Check browser developer tools for any JavaScript errors
### Slot Dropdown Shows Wrong Slots (v0.2.0+)
**Issue**: Slot dropdown shows slots from a different layout.
**Solutions**:
1. This is a caching issue - refresh the page
2. If it persists, clear the browser cache
3. Check browser console for API errors
4. Verify the slot detection endpoint is accessible at `/<wagtail-admin-prefix>/reusable-blocks/blocks/{id}/slots/` (the prefix matches your `WAGTAIL_ADMIN_URL_PATH` setting, defaulting to `admin`)
### Search Not Working
**Issue**: Created blocks don't appear in search
**Solution**: Run `python manage.py update_index` to rebuild the search index. New blocks are automatically indexed on save.
## Requirements
| Python | Django | Wagtail |
|--------|--------|---------|
| 3.10+ | 4.2, 5.1, 5.2 | 6.4, 7.0, 7.2 |
See our [CI configuration](.github/workflows/ci.yml) for the complete compatibility matrix.
## Documentation
- [Architecture & Design Decisions](docs/ARCHITECTURE.md)
- [Glossary of Terms](docs/GLOSSARY.md)
- [Revisions & Workflows](docs/REVISIONS.md) (v0.3.0+)
- [Performance Guide](docs/PERFORMANCE.md) (v0.3.0+)
- [REST API Guide](docs/API.md) (v0.8.0+)
- [Contributing Guide](CONTRIBUTING.md)
## Project Links
- [GitHub Repository](https://github.com/kkm-horikawa/wagtail-reusable-blocks)
- [Project Board](https://github.com/users/kkm-horikawa/projects/6)
- [Issue Tracker](https://github.com/kkm-horikawa/wagtail-reusable-blocks/issues)
## Contributing
We welcome contributions! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
## License
BSD 3-Clause License. See [LICENSE](LICENSE) for details.
## Inspiration
- [WordPress Gutenberg Synced Patterns](https://wordpress.org/documentation/article/reusable-blocks/)
- [Wagtail CRX Reusable Content](https://docs.coderedcorp.com/wagtail-crx/features/snippets/reusable_content.html)
- [React Slots and Composition](https://react.dev/learn/passing-props-to-a-component)
| text/markdown | kkm-horikawa | null | null | null | BSD-3-Clause | blocks, cms, components, django, reusable, wagtail | [
"Development Status :: 3 - Alpha",
"Environment :: Web Environment",
"Framework :: Django",
"Framework :: Django :: 4.2",
"Framework :: Django :: 5.1",
"Framework :: Django :: 5.2",
"Framework :: Wagtail",
"Framework :: Wagtail :: 6",
"Framework :: Wagtail :: 7",
"Intended Audience :: Developers",... | [] | null | null | >=3.10 | [] | [] | [] | [
"django>=4.2",
"wagtail>=6.0",
"djangorestframework>=3.14; extra == \"api\"",
"django-stubs>=5.1; extra == \"dev\"",
"mypy>=1.13; extra == \"dev\"",
"pre-commit>=4.0; extra == \"dev\"",
"pytest-benchmark>=4.0; extra == \"dev\"",
"pytest-cov>=4.1; extra == \"dev\"",
"pytest-django>=4.8; extra == \"de... | [] | [] | [] | [
"Homepage, https://github.com/kkm-horikawa/wagtail-reusable-blocks",
"Documentation, https://github.com/kkm-horikawa/wagtail-reusable-blocks#readme",
"Repository, https://github.com/kkm-horikawa/wagtail-reusable-blocks.git",
"Issues, https://github.com/kkm-horikawa/wagtail-reusable-blocks/issues",
"Changelo... | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:11:25.416269 | wagtail_reusable_blocks-0.8.2.tar.gz | 82,482 | de/9e/6e4c0c8e6461337b3c501cc1e9566bea0bf92b30b93e1ed7036086359b42/wagtail_reusable_blocks-0.8.2.tar.gz | source | sdist | null | false | 65d61fa36509e838356610fdf03859ab | 0fb2654506dc5bf3bebcd8e4d942bc9097617334eb0fe36b42f73766ea41a335 | de9e6e4c0c8e6461337b3c501cc1e9566bea0bf92b30b93e1ed7036086359b42 | null | [
"LICENSE"
] | 226 |
2.4 | transformez | 0.2.2 | A standalone utility for vertical elevation datum transformations. | # 🌍 Transformez ↕
**Global vertical datum transformations, simplified.**
*Transformez Les Données*
> 🚀 **v0.2.0:** Now supporting global tidal transformations via FES2014 & SEANOE.
**Transformez** is a standalone Python engine for converting geospatial data between vertical datums (e.g., `MLLW` ↔ `NAVD88` ↔ `Ellipsoid`).
---
## Installation
```bash
pip install transformez
```
*Requires [htdp](https://geodesy.noaa.gov/TOOLS/Htdp/Htdp.shtml) to be in your system PATH for frame transformations.*
## Usage
**Generate a vertical shift grid for anywhere on Earth.**
```bash
# Transform MLLW to WGS84 Ellipsoid in Norton Sound, AK
# (Where NOAA has no coverage!)
transformez -R -166/-164/63/64 -E 3s \
--input-datum mllw \
--output-datum 4979 \
--output shift_ak.tif
```
**Transform a raster directly.** Transformez reads the bounds/resolution from the file.
```bash
transformez --dem input_bathymetry.tif \
--input-datum "mllw" \
--output-datum "5703:geoid=geoid12b" \
--output output_navd88.tif
```
**Integrate directly into your download pipeline.**
```bash
# Download GEBCO and shift EGM96 to WGS84 on the fly
fetchez gebco ... --hook transformez:datum_in=5773,datum_out=4979
```
## Python API
```python
from transformez.transform import VerticalTransform
from fetchez.spatial import Region
# Define a region in India (Bay of Bengal)
region = Region(80, 85, 10, 15)
# Initialize Transformer
# Requesting "MLLW" in India triggers the Global Fallback automatically
vt = VerticalTransform(
    region=region,
    nx=1000, ny=1000,
    epsg_in="mllw",        # Will resolve to FES2014 LAT
    epsg_out="epsg:4979",  # WGS84 Ellipsoid
)
# Generate Shift
shift, unc = vt._vertical_transform(vt.epsg_in, vt.epsg_out)
```
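Once generated, the shift grid is simply an array of vertical offsets on the requested grid, so applying it to co-registered elevation data is element-wise addition. A minimal numpy illustration (the arrays here are synthetic stand-ins; real grids come from `VerticalTransform` or a GeoTIFF):

```python
import numpy as np

# Elevations in the input datum (e.g. MLLW) on a small 3x3 grid
dem = np.full((3, 3), 10.0)

# Vertical shift grid (input datum -> output datum) on the same grid
shift = np.full((3, 3), -1.5)

# Elevations in the output datum (e.g. WGS84 ellipsoid heights)
dem_out = dem + shift
print(dem_out[0, 0])  # → 8.5
```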
## Supported Datums
* **Tidal**: mllw, mhhw, msl, lat
* **Ellipsoidal**: 4979 (WGS84), 6319 (NAD83 2011)
* **Orthometric**: 5703 (NAVD88), egm2008, egm96
* **Geoids**: g2018, g2012b, geoid09, xgeoid20b
## License
This project is licensed under the MIT License - see the [LICENSE](https://github.com/ciresdem/transformez/blob/main/LICENSE) file for details.
Copyright (c) 2010-2026 Regents of the University of Colorado | text/markdown | null | Matthew Love <matthew.love@colorado.edu>, Christopher Amante <christopher.amante@colorado.edu>, Elliot Lim <elliot.lim@colorado.edu>, Michael MacFerrin <michael.macferrin@colorado.edu> | null | Matthew Love <matthew.love@colorado.edu> | MIT License
Copyright (c) 2010-2026 Regents of the University of Colorado
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | Geospatial | [
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering :: GIS"
] | [] | null | null | >=3.8 | [] | [] | [] | [
"fetchez>0.3.3",
"numpy<2.0.0",
"pyproj",
"rasterio",
"scipy"
] | [] | [] | [] | [
"Homepage, https://github.com/continuous-dems/transformez",
"Issues, https://github.com/continuous-dems/transformez/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:09:55.850393 | transformez-0.2.2.tar.gz | 25,028 | de/22/de7c3258d55ab3c06d141c64fc2d72095e73f1dffd1f75a853ad2f74cb6a/transformez-0.2.2.tar.gz | source | sdist | null | false | 2ec4a449b1a68b9f081ddc99c1e57d4d | cf7f889d7a3ae00db340e3dbff18b6bd8e2a8ddfa073afb7f14c8f4bdaa2d2b9 | de22de7c3258d55ab3c06d141c64fc2d72095e73f1dffd1f75a853ad2f74cb6a | null | [
"AUTHORS.md",
"LICENSE"
] | 216 |
2.4 | projectdavid | 1.54.5 | Python SDK for interacting with the Entities Assistant API. | # Entity — by Project David
[](https://github.com/frankie336/entitites_sdk/actions/workflows/test_tag_release.yml)
[](https://polyformproject.org/licenses/noncommercial/1.0.0/)
The **Entity SDK** is a composable, Pythonic interface to the [Entities API](https://github.com/frankie336/entities_api) for building intelligent applications across **local, open-source**, and **cloud LLMs**.
It unifies:
- Users, threads, assistants, messages, runs, inference
- **Function calling**, **code interpretation**, and **structured streaming**
- Vector memory, file uploads, and secure tool orchestration
Local inference is fully supported via [Ollama](https://github.com/ollama).
---
## 🔌 Supported Inference Providers
| Provider | Type |
|--------------------------------------------------|--------------------------|
| [Ollama](https://github.com/ollama) | **Local** (Self-Hosted) |
| [DeepSeek](https://platform.deepseek.com/) | ☁ **Cloud** (Open-Source) |
| [Hyperbolic](https://hyperbolic.xyz/) | ☁ **Cloud** (Proprietary) |
| [OpenAI](https://platform.openai.com/) | ☁ **Cloud** (Proprietary) |
| [Together AI](https://www.together.ai/) | ☁ **Cloud** (Aggregated) |
| [Azure Foundry](https://azure.microsoft.com) | ☁ **Cloud** (Enterprise) |
---
## 📦 Installation
```bash
pip install projectdavid
```
---
# Quick Start
## Standard Synchronous Stream
**The standard synchronous streaming interface is considered legacy.
Implementations based on this interface will continue to function, but developers are strongly encouraged to use the event-driven interface instead.**
```python
"""
Standard Inference Test (No Tools)
---------------------------------------------------
1. Simple prompt -> response.
2. Handles Reasoning (DeepSeek) and Content events.
3. No tool execution or recursion logic.
"""
import os
from dotenv import load_dotenv
from projectdavid import Entity
load_dotenv()
# --------------------------------------------------
# Load the Entities client with your user API key
# Note: if you define ENTITIES_API_KEY="ea_6zZiZ..."
# in .env, you do not need to pass in the API key directly.
# We pass in here directly for clarity
# ---------------------------------------------------
client = Entity(base_url="http://localhost:9000", api_key=os.getenv("ENTITIES_API_KEY"))
user_id = os.getenv("ENTITIES_USER_ID")
# -----------------------------
# create an assistant
# ------------------------------
assistant = client.assistants.create_assistant(
    name="test_assistant",
    instructions="You are a helpful AI assistant",
)
print(f"created assistant with ID: {assistant.id}")
# -----------------------------------------------
# Create a thread
# Note:
# - Threads are re-usable
# Reuse threads in the case you want as continued
# multi turn conversation
# ------------------------------------------------
print("Creating thread...")
thread = client.threads.create_thread()
print(f"created thread with ID: {thread.id}")
# Store the dynamically created thread ID
actual_thread_id = thread.id
# -----------------------------------------
# Create a message using the NEW thread ID
# --------------------------------------------
print(f"Creating message in thread {actual_thread_id}...")
message = client.messages.create_message(
    thread_id=actual_thread_id,
    role="user",
    content="Hello, assistant! Tell me about the latest trends in AI.",
    assistant_id=assistant.id,
)
print(f"Created message with ID: {message.id}")
# ---------------------------------------------
# step 3 - Create a run using the NEW thread ID
# ----------------------------------------------
print(f"Creating run in thread {actual_thread_id}...")
run = client.runs.create_run(assistant_id=assistant.id, thread_id=actual_thread_id)
print(f"Created run with ID: {run.id}")
# ------------------------------------------------
# Instantiate the synchronous streaming helper
# --------------------------------------------------
sync_stream = client.synchronous_inference_stream
# ------------------------------------------------------
# step 4 - Set up the stream using the NEW thread ID
# --------------------------------------------------------
print(f"Setting up stream for thread {actual_thread_id}...")
sync_stream.setup(
    user_id=user_id,
    thread_id=actual_thread_id,
    assistant_id=assistant.id,
    message_id=message.id,
    run_id=run.id,
    api_key=os.getenv("HYPERBOLIC_API_KEY"),
)
print("Stream setup complete. Starting streaming...")
# --- Stream initial LLM response ---
try:
    for chunk in sync_stream.stream_chunks(
        provider="Hyperbolic",
        model="hyperbolic/deepseek-ai/DeepSeek-V3-0324",  # Ensure this model is valid/available
        timeout_per_chunk=15.0,
    ):
        content = chunk.get("content", "")
        if content:
            print(content, end="", flush=True)
    print("\n--- End of Stream ---")  # Add newline after stream
except Exception as e:
    print(f"\n--- Stream Error: {e} ---")  # Catch errors during streaming
print("Script finished.")
```
## Standard event-driven Stream
**The event-driven interface provides access to advanced Level 2 and Level 3 agentic capabilities.
Event and stream types can be handled in the back end before being rendered in the front-end application.**
```python
import os
from projectdavid import Entity
from projectdavid.events import ContentEvent, ReasoningEvent
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
# 1. Initialize Client
client = Entity(
    base_url=os.getenv("BASE_URL"),
    api_key=os.getenv("ENTITIES_API_KEY"),
)
# -----------------------------
# create an assistant
# ------------------------------
assistant = client.assistants.create_assistant(
    name="test_assistant",
    instructions="You are a helpful AI assistant",
)
print(f"created assistant with ID: {assistant.id}")
MODEL_ID = "hyperbolic/deepseek-ai/DeepSeek-V3"
PROVIDER = "Hyperbolic"
# 2. Create Conversation Context
# We create a Thread, add a Message, and create a Run.
thread = client.threads.create_thread()
print("-> Sending Prompt...")
message = client.messages.create_message(
    thread_id=thread.id,
    role="user",
    content="Explain the difference between TCP and UDP in one paragraph.",
    assistant_id=assistant.id,
)
run = client.runs.create_run(assistant_id=assistant.id, thread_id=thread.id)

# 3. Initialize the Stream
stream = client.synchronous_inference_stream
stream.setup(
    user_id=os.getenv("ENTITIES_USER_ID"),
    thread_id=thread.id,
    assistant_id=assistant.id,
    message_id=message.id,
    run_id=run.id,
    api_key=os.getenv("HYPERBOLIC_API_KEY"),
)

# 4. Event Loop
# The SDK handles the connection and parses the stream into events.
print(f"-> Streaming from {MODEL_ID}...\n")
current_mode = None
try:
    for event in stream.stream_events(provider=PROVIDER, model=MODEL_ID):
        # A. Handle Reasoning (Chain-of-Thought)
        # Models like DeepSeek or o1 emit thoughts before the answer.
        if isinstance(event, ReasoningEvent):
            if current_mode != "reasoning":
                print("\n[🤔 THOUGHTS]: ", end="")
                current_mode = "reasoning"
            print(event.content, end="", flush=True)

        # B. Handle Standard Content
        # This is the final answer intended for the user.
        elif isinstance(event, ContentEvent):
            if current_mode != "content":
                print("\n\n[🤖 ANSWER]: ", end="")
                current_mode = "content"
            print(event.content, end="", flush=True)
except Exception as e:
    print(f"\n[!] Error: {e}")
print("\n\nDone.")
```
### Model Routes
The script above maps each model to a route suffix that you use when calling the API.
For example, to invoke the DeepSeek V3 model hosted on Hyperbolic you would use the suffix:
`hyperbolic/deepseek-ai/DeepSeek-V3-0324`
Below is a table that lists the route suffix for every supported model.
[View Model Routes Table](./docs/model_routes.md)
**The assistant's response**:
Hello! The field of AI is evolving rapidly, and here are some of the latest trends as of early 2025:
### 1. **Multimodal AI Models**
- Models like GPT-4, Gemini, and others now seamlessly process text, images, audio, and video in a unified way, enabling richer interactions (e.g., ChatGPT with vision).
- Applications include real-time translation with context, AI-generated video synthesis, and more immersive virtual assistants.
### 2. **Smaller, More Efficient Models**
- While giant models (e.g., GPT-4, Claude 3) still dominate, there’s a push for smaller, specialized models (e.g., Microsoft’s Phi-3, Mistral 7B) that run locally on devices with near-LLM performance.
- Focus on **energy efficiency** and reduced computational costs.
### 3. **AI Agents & Autonomous Systems**
- AI “agents” (e.g., OpenAI’s “Agentic workflows”) can now perform multi-step tasks autonomously, like coding, research, or booking trips.
- Companies are integrating agentic AI into workflows (e.g., Salesforce, Notion AI).
### 4. **Generative AI Advancements**
- **Video generation**: Tools like OpenAI’s Sora, Runway ML, and Pika Labs produce high-quality, longer AI-generated videos.
- **3D asset creation**: AI can now generate 3D models from text prompts (e.g., Nvidia’s tools).
- **Voice cloning**: Ultra-realistic voice synthesis (e.g., ElevenLabs) is raising ethical debates.
### 5. **Regulation & Ethical AI**
- Governments are catching up with laws like the EU AI Act and U.S. executive orders on AI safety.
- Watermarking AI content (e.g., C2PA standards) is gaining traction to combat deepfakes.
### 6. **AI in Science & Healthcare**
- AlphaFold 3 (DeepMind) predicts protein interactions with unprecedented accuracy.
- AI-driven drug discovery (e.g., Insilico Medicine) is accelerating clinical trials.
### 7. **Open-Source vs. Closed AI**
- Tension between open-source (Mistral, Meta’s Llama 3) and proprietary models (GPT-4, Gemini) continues, with debates over safety and innovation.
### 8. **AI Hardware Innovations**
- New chips (e.g., Nvidia’s Blackwell, Groq’s LPUs) are optimizing speed and cost for AI workloads.
- “AI PCs” with NPUs (neural processing units) are becoming mainstream.
### 9. **Personalized AI**
- Tailored AI assistants learn individual preferences (e.g., Rabbit R1, Humane AI Pin).
- Privacy-focused local AI (e.g., Apple’s on-device AI in iOS 18).
### 10. **Quantum AI (Early Stages)**
- Companies like Google and IBM are exploring quantum machine learning, though practical applications remain limited.
Would you like a deeper dive into any of these trends?
---
## Documentation
| Domain | Link |
|---------------------|--------------------------------------------------------|
| Assistants | [assistants.md](/docs/assistants.md) |
| Threads | [threads.md](/docs/threads.md) |
| Messages | [messages.md](/docs/messages.md) |
| Runs | [runs.md](/docs/runs.md) |
| Inference | [inference.md](/docs/inference.md) |
| Streaming | [streams.md](/docs/streams.md) |
| Tools | [function_calls.md](/docs/function_calls.md) |
| Code Interpretation | [code_interpretation.md](/docs/code_interpreter.md) |
| Files | [files.md](/docs/files.md) |
| Vector Store(RAG) | [vector_store.md](/docs/vector_store.md) |
| Versioning | [versioning.md](/docs/versioning.md) |
---
## ✅ Compatibility & Requirements
- Python **3.10+**
- Compatible with **local** or **cloud** deployments of the Entities API
---
## Related Repositories
- [Entities API](https://github.com/frankie336/entities_api) — containerized API backend
- [entities_common](https://github.com/frankie336/entities_common) — shared validation, schemas, utilities, and tools. Installed automatically as a dependency of the Entities SDK and the Entities API.
| text/markdown | null | Francis Neequaye Armah <francis.neequaye@projectdavid.co.uk> | null | null | PolyForm Noncommercial License 1.0.0 | AI, SDK, Entities, LLM, Assistant | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"
] | [] | null | null | >=3.9 | [] | [] | [] | [
"httpx<0.29,>=0.25.2",
"pydantic<3.0,>=2.0",
"python-dotenv<2.0,>=1.0.1",
"aiofiles<25.0,>=23.2.1",
"projectdavid_common==0.27.0",
"qdrant-client<2.0.0,>=1.0.0",
"pdfplumber<0.12.0,>=0.11.0",
"validators<0.35.0,>=0.29.0",
"sentence-transformers<5.0,>=3.4.0",
"sseclient-py",
"requests",
"python... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:09:43.709901 | projectdavid-1.54.5.tar.gz | 115,869 | bb/c5/b9f506246b13c2f0a1283158ba91381e914f169329055a9612cc639d1031/projectdavid-1.54.5.tar.gz | source | sdist | null | false | 4a37b164957060c58267f7a3995712b3 | 88420961f0a8fcf4efb4dec0ac9907284959e9793d1b6d1723c0f0631eb30dbc | bbc5b9f506246b13c2f0a1283158ba91381e914f169329055a9612cc639d1031 | null | [
"LICENSE"
] | 222 |
2.4 | binxai-claudex | 1.0.0 | Set up Claude Code for any project in one command | # Claudex
> **Set up Claude Code for any project in one command**
Zero-dependency Python CLI that analyzes your project and generates a complete `.claude/` configuration with project-specific CLAUDE.md, hooks, commands, and development workflows.
[](https://github.com/Binx808/claudex/actions/workflows/test.yml)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
---
## Features
✨ **Smart Detection** - Analyzes your project (pyproject.toml, package.json, directory structure) to detect:
- Language (Python, TypeScript, JavaScript)
- Framework (FastAPI, Django, Flask, Next.js, React, Vue)
- Package manager (uv, poetry, pip, npm, pnpm, yarn)
- Database (PostgreSQL, MySQL, MongoDB, SQLite)
- Infrastructure (Docker, CI, Git)
📝 **Generated CLAUDE.md** - Uses **actual project data** (not templates):
- Real directory tree from your project
- Quick start commands for your package manager
- Framework-specific testing strategies
- Layer rules tailored to your architecture
🎯 **Auto-Preset Selection** - Detects your stack and chooses the best preset:
- `python-fastapi` for FastAPI projects
- `python-django` for Django projects
- `nextjs` for Next.js projects
- `generic` fallback for everything else
🔒 **Security Built-In** - Path traversal protection, never commits secrets
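Path-traversal protection typically means refusing any user-supplied path that resolves outside the target directory. A minimal sketch of that check (illustrative only — `safe_join` is a hypothetical helper, not claudex's actual code):

```python
from pathlib import Path

def safe_join(base: Path, relative: str) -> Path:
    # Resolve the candidate path and ensure it stays under `base`.
    resolved_base = base.resolve()
    candidate = (resolved_base / relative).resolve()
    if not candidate.is_relative_to(resolved_base):  # Python 3.9+
        raise ValueError(f"path escapes base directory: {relative}")
    return candidate

safe_join(Path("."), "subdir/file.txt")    # resolves safely under cwd
# safe_join(Path("."), "../outside.txt")   # raises ValueError
```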
---
## Installation
```bash
# Install from source (for now)
cd /path/to/claudex
pip install -e .
# Or build and install wheel
python -m build
pip install dist/claudex-1.0.0-*.whl
```
**Coming soon**: `pip install claudex` (PyPI)
**Windows Note**: If `pip install` completes but the `claudex` command isn't found, you have three options:
1. Use `python -m claudex` instead of `claudex` for all commands
2. Use the provided `claudex.bat` wrapper script
3. Add your Python Scripts directory to PATH
---
## Quick Start
```bash
# Initialize a new project
claudex init /path/to/project --yes
# See what would be detected first
claudex info /path/to/project
# Update an existing .claude/ setup (preserves user files)
claudex update /path/to/project
# Validate your .claude/ setup
claudex validate /path/to/project
# List available presets
claudex presets
```
---
## Commands
### `claudex init [DIR]`
Initialize .claude/ configuration for a project.
**Options**:
- `--preset <name>` - Override auto-detection (python-fastapi, python-django, nextjs, generic)
- `--yes` - Skip confirmation prompt
- `--force` - Overwrite existing .claude/
- `--global` - Also install ~/.claude/ global config
**Example**:
```bash
# Auto-detect and initialize
cd my-fastapi-project
claudex init . --yes
# Override preset
claudex init /path/to/django-app --preset python-django
# Full setup: global + project
claudex init . --yes --global
```
**What it creates**:
- `.claude/` directory with:
- `hooks/` - 6 Python hooks (pre/post-tool-use, session lifecycle)
- `commands/` - 17 slash commands (`/dev`, `/audit`, `/parallel`, etc.)
- `rules/` - Development guidelines (workflow, naming, testing)
- `session/` - Task persistence files
- `feedback/` - Violation tracking
- `knowledge/` - 100x patterns reference
- `CLAUDE.md` - Generated from your actual project structure
- `.mcp.json` - MCP server config template
- Updates `.gitignore` to include `.claude/`
---
### `claudex update [DIR]`
Update existing .claude/ files without regenerating CLAUDE.md.
**Preserves**:
- `session/CURRENT_TASK.md`
- `session/TASK_PROGRESS.md`
- `session/BACKGROUND_QUEUE.md`
- `session/PARALLEL_SESSIONS.md`
- `feedback/*` (violations, lessons, corrections)
- `knowledge/*` (user-added knowledge files)
- `docs/*`
**Example**:
```bash
claudex update .
```
---
### `claudex validate [DIR]`
Health check your .claude/ setup.
**Checks**:
- `.claude/` directory exists
- Required subdirectories present (hooks, commands, rules, session, feedback)
- Required files present (settings.json, all hook scripts)
- `CLAUDE.md` exists at project root
- `.gitignore` includes `.claude/`
- `.mcp.json` exists (warns if missing)
**Example**:
```bash
claudex validate /path/to/project
```
**Output**:
```
✓ PASS: .claude/ directory exists
✓ PASS: All required directories present
✗ FAIL: Missing .claude/hooks/session-start.py
✓ PASS: CLAUDE.md exists
✗ FAIL: .gitignore does not include .claude/
Validation failed. Run 'claudex init --force' to restore.
```
---
### `claudex info [DIR]`
Show detection results without making changes.
**Example**:
```bash
claudex info /path/to/fastapi-project
```
**Output**:
```
Project: my-fastapi-app
Language: python
Framework: FastAPI
Package manager: uv
Python version: >=3.11
Database: postgresql
Redis: yes
Docker: yes
CI: yes
Auto-selected preset: python-fastapi
Directory tree:
my-fastapi-app/
src/
api/
core/
db/
tests/
unit/
integration/
client/
```
---
### `claudex presets`
List all available presets with descriptions.
**Example**:
```bash
claudex presets
```
**Output**:
```
Available presets:
python-fastapi - Python + FastAPI + SQLAlchemy + PostgreSQL
python-django - Python + Django + DRF + PostgreSQL
nextjs - Next.js + TypeScript + Tailwind + TanStack Query
generic - Minimal setup for any project
```
---
## Presets
Each preset provides tailored configuration for specific stacks:
### `python-fastapi`
- **Stack**: Python + FastAPI + SQLAlchemy + PostgreSQL
- **Architecture**: Domain/Application/Agents/Infrastructure layers
- **Quick start**: `uv sync`, `docker-compose up -d`, `uvicorn app:app --reload`
- **Testing**: pytest with 95% domain coverage target
### `python-django`
- **Stack**: Python + Django + Django REST Framework + PostgreSQL
- **Architecture**: Django apps with clean separation
- **Quick start**: `poetry install`, `python manage.py migrate`, `python manage.py runserver`
- **Testing**: Django test framework
### `nextjs`
- **Stack**: Next.js + TypeScript + Tailwind CSS + TanStack Query
- **Architecture**: App router with components/hooks/lib
- **Quick start**: `pnpm install`, `pnpm dev`
- **Testing**: Vitest + React Testing Library
### `generic`
- **Stack**: Any language/framework
- **Architecture**: Minimal recommendations
- **Quick start**: Auto-detected or manual
- **Testing**: Standard coverage targets
---
## After Setup
### 1. Configure MCP (Optional)
Edit `.mcp.json` to connect Claude Code to GitHub:
```bash
# Get your GitHub token
gh auth token
# Paste into .mcp.json "args" for github server
```
### 2. Start Claude Code
```bash
cd /path/to/project
# Claude Code will auto-load .claude/ configuration
```
### 3. Begin Development
```bash
# Start your first task
/dev start "implement user authentication"
# Continue after break
/dev continue
# Complete when done
/dev complete
```
---
## 100x Developer Workflow
The scaffold implements the **100x Developer Framework** for maximum throughput:
### Background Agents (Night Queue)
Accumulate tasks during deep work, execute overnight:
```bash
/background-queue add "Add unit tests for user service"
/background-queue add "Fix lint warnings in api/ directory"
/background-queue add "Update docstrings in core modules"
/night-kick # Generates headless launch commands
# Copy-paste commands, agents run overnight
/background-queue review # Check results next morning
```
### Parallel Sessions
Split large features across multiple Claude Code sessions:
```bash
/parallel plan "Sprint 5: Add real-time features"
# Creates PARALLEL_SESSIONS.md with:
# - Session table (file ownership, no overlaps)
# - Merge order (dependency-aware)
# - Worktree creation commands
# Launch sessions in separate terminals
cd ../project-session-a && /dev start ...
cd ../project-session-b && /dev start ...
/parallel status # Check progress
/parallel merge-order # Get correct merge sequence
/parallel cleanup # Remove completed worktrees
```
### AI Code Review
Runs automatically on every PR via `.github/workflows/claude-code-review.yml` (created by the scaffold):
- Project-specific review rules
- Architecture compliance checks
- Tag `@claude` in any PR comment for an on-demand review
---
## Slash Commands Reference
| Command | Purpose |
|---------|---------|
| `/dev start <task>` | Initialize new task with session files |
| `/dev continue` | Resume from session files after compaction |
| `/dev checkpoint` | Save progress to disk |
| `/dev validate` | Run all validations |
| `/dev complete` | Complete task with full QA |
| `/audit` | Code quality and security audit |
| `/run-tests` | Execute test suite with reporting |
| `/validate-architecture` | Check layer placement and dependencies |
| `/validate-consistency` | Cross-layer schema/type/enum check |
| `/background-queue` | Manage background agent task queue |
| `/night-kick` | Launch queued background agents |
| `/parallel` | Plan and manage parallel sessions |
| `/report-violation` | Log and track workflow violations |
| `/improve-workflow` | Analyze feedback and propose improvements |
| `/expert-*` | Consult specialized subagents |
---
## Customization
### Modify Detection Logic
Edit `claudex/detectors.py`:
- `PYTHON_FRAMEWORKS` - Add new Python frameworks
- `JS_FRAMEWORKS` - Add new JavaScript frameworks
- `DB_INDICATORS` - Add database detection patterns
### Add New Presets
Create `claudex/presets/your-preset.yaml`:
```yaml
name: your-stack
description: Your custom stack description
architecture_tree: |
project/
src/
tests/
layer_description: |
- **src/**: Application code
- **tests/**: Test suite
layer_rules:
- Domain layer: Pure business logic
- Application layer: Use cases
quick_start: |
npm install
npm run dev
```
### Modify Templates
Edit files in `claudex/templates/project/`:
- `hooks/` - Customize Python hooks
- `commands/` - Add new slash commands
- `rules/` - Modify development guidelines
---
## Development
### Running Tests
```bash
# Install dev dependencies
pip install -e .
pip install pytest
# Run all tests
pytest tests/ -v
# Run specific test file
pytest tests/test_detectors.py -v
# Run with coverage
pytest tests/ --cov=claudex --cov-report=html
```
### Running CI Locally
```bash
# Lint
ruff check claudex/ tests/
# Format check
ruff format claudex/ tests/ --check
# Auto-fix
ruff check claudex/ tests/ --fix
ruff format claudex/ tests/
```
---
## Troubleshooting
### Issue: Detection fails on Windows
**Symptom**: Unicode errors when printing directory tree
**Fix**: Unicode encoding is handled internally with fallback to ASCII. If you still see errors, set:
```bash
set PYTHONIOENCODING=utf-8
```
### Issue: Templates not found after pip install
**Symptom**: `FileNotFoundError: templates/project/`
**Fix**: Templates are inside the package. If using editable install (`pip install -e .`), ensure you're in the repo directory. For normal install, templates resolve via `__file__`.
### Issue: Can't detect my framework
**Symptom**: Auto-selects `generic` preset when it should detect FastAPI/Django/Next.js
**Fix**: Check your dependencies:
- **Python**: Must be in `pyproject.toml` `[project.dependencies]` or `[tool.poetry.dependencies]`
- **JavaScript**: Must be in `package.json` `dependencies` (not `devDependencies`)
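For the Python case, a dependency that should trigger detection belongs here (package names beyond `fastapi` and the version pins are illustrative):

```toml
# pyproject.toml
[project]
name = "my-fastapi-app"
dependencies = [
    "fastapi>=0.110",   # detected -> python-fastapi preset
    "sqlalchemy>=2.0",
]
```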
### Issue: CLAUDE.md not project-specific
**Symptom**: Generated CLAUDE.md has generic architecture tree
**Fix**: Detection couldn't find source directories. Ensure:
- Python: `src/` or `app/` directory with `.py` files
- JavaScript: `src/` or `app/` directory with `.ts`/`.tsx`/`.js` files
---
## Requirements
- Python 3.11+ (for `tomllib` stdlib)
- No external dependencies (stdlib only)
- Git (for worktree support in parallel sessions)
---
## License
MIT License - see [LICENSE](LICENSE)
---
## Contributing
Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass (`pytest tests/`)
5. Submit a pull request
---
## Roadmap
- [ ] Publish to PyPI
- [ ] Add more presets (Flask, Express, Vue, Svelte)
- [ ] Preset inheritance (`extends:` in YAML)
- [ ] Detection confidence scores
- [ ] `preview` command (show what would be generated)
- [ ] Monorepo support (detect sub-projects)
---
**Made for Claude Code** - Anthropic's official CLI for Claude
| text/markdown | null | Binx808 <smarttype@gmail.com> | null | Binx808 <smarttype@gmail.com> | MIT | claude, claude-code, ai, developer-tools, code-generation, project-setup, fastapi, django, nextjs | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Build... | [] | null | null | >=3.11 | [] | [] | [] | [
"pytest>=8.0; extra == \"dev\"",
"pytest-cov>=5.0; extra == \"dev\"",
"ruff>=0.8; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/BinxAI/claudex",
"Repository, https://github.com/BinxAI/claudex",
"Issues, https://github.com/BinxAI/claudex/issues",
"Documentation, https://github.com/BinxAI/claudex#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:09:41.255717 | binxai_claudex-1.0.0.tar.gz | 71,929 | e1/d1/51d921bc8cf60b7c661ae5bfd3389f62842942d2a5e6430c76636cc173f7/binxai_claudex-1.0.0.tar.gz | source | sdist | null | false | b8ce23bd8560f36567e38dbadda3c9fe | 1c92742a8400f9b930f34922e585135a0ca2899b94b208e6e19eb8ab14bec8a4 | e1d151d921bc8cf60b7c661ae5bfd3389f62842942d2a5e6430c76636cc173f7 | null | [
"LICENSE"
] | 213 |
2.4 | steer-core | 0.1.45 | Modelling energy storage from cell to site - STEER OpenCell Design | # steer-core
Base utilities for the OpenCell platform: constants, mixins (Serializer, Validation, Plotter), decorators, and the DataManager REST API client.
## Install
```bash
pip install -e .
```
## Environment Variables
| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `OPENCELL_ENV` | No | `production` | `development` = local SQLite, no auth. `production` = REST API + Cognito auth. |
| `API_URL` | In production | — | Base URL of the deployed REST API (e.g. `https://59xitvvsf2.execute-api.us-east-2.amazonaws.com/production`) |
| `API_TIMEOUT` | No | `30` | HTTP request timeout in seconds |
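For example, switching between the two modes is just a matter of environment variables (the API URL below is a placeholder, not a real endpoint):

```shell
# Offline development: local SQLite, no auth
export OPENCELL_ENV=development

# Production: REST API + Cognito auth
export OPENCELL_ENV=production
export API_URL="https://example.execute-api.us-east-2.amazonaws.com/production"
export API_TIMEOUT=60
```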
## Development vs Production Mode
Controlled by the `OPENCELL_ENV` environment variable. The helper `is_development()` from `steer_core.Data` is the single source of truth — use it anywhere you need to branch on mode.
```python
from steer_core.Data import is_development

if is_development():
    ...  # local SQLite path
else:
    ...  # REST API path
```
### Development mode (`OPENCELL_ENV=development`)
- `SerializerMixin.from_database()` uses the local SQLite database via `steer_opencell_data.DataManager`
- No network calls, works fully offline
- Requires `steer-opencell-data` installed with `database.db`
- Use this when developing new cells locally before publishing via the CLI migration tool (`steer-opencell-data` CLI)
### Production mode (`OPENCELL_ENV=production` or unset)
- `SerializerMixin.from_database()` uses the REST API via `steer_core.Data.DataManager`
- Requires `API_URL` pointing to the deployed Lambda endpoint
- JWT token passed automatically for authenticated operations (`DataManager.set_token()`)
- Logs API calls and S3 downloads to the `steer_core.DataManager` logger
## DataManager REST Client
`steer_core.Data.DataManager` — drop-in replacement for the SQLite-based DataManager. Same interface, talks to the REST API + S3 instead.
### Key methods
| Method | What it does |
|--------|-------------|
| `get_data(table, condition="name='X'")` | Fetch item + download blob from S3 presigned URL |
| `get_data(table)` (no condition) | List items — metadata only, no blob |
| `get_unique_values(table, column)` | List unique values from API |
| `get_{type}_materials(most_recent)` | 9 material-specific convenience methods |
| `insert_data(table, df)` | Upload blob to S3 via presigned URL |
| `remove_data(table, condition)` | Soft-delete via API |
| `fork_cell(table, name, new_name)` | Fork cell (auth required) |
| `publish_cell(table, name, new_name)` | Publish cell (admin only) |
| `check_name_available(name)` | Check name uniqueness across all cell tables |
| `set_token(token)` | Set JWT for authenticated requests |
### Exceptions
| Exception | HTTP Status | When |
|-----------|-------------|------|
| `DataManagerError` | — | Base class / missing `API_URL` |
| `APIError` | 5xx | Server error |
| `AuthenticationError` | 401 | Missing or invalid token |
| `ForbiddenError` | 403 | Insufficient permissions |
| `NotFoundError` | 404 | Resource not found |
| `ConflictError` | 409 | Name already taken (fork/publish) |
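As a rough illustration of how a caller might branch on this hierarchy, here is a self-contained sketch with stand-in classes mirroring the table above (these are not the library's actual definitions, and `raise_for_status` is a hypothetical helper):

```python
class DataManagerError(Exception):
    """Base class; also raised when API_URL is missing."""

class APIError(DataManagerError):             # 5xx server error
    pass

class AuthenticationError(DataManagerError):  # 401 missing or invalid token
    pass

class ForbiddenError(DataManagerError):       # 403 insufficient permissions
    pass

class NotFoundError(DataManagerError):        # 404 resource not found
    pass

class ConflictError(DataManagerError):        # 409 name already taken
    pass

_STATUS_TO_EXC = {
    401: AuthenticationError,
    403: ForbiddenError,
    404: NotFoundError,
    409: ConflictError,
}

def raise_for_status(status: int, message: str = "") -> None:
    """Translate an HTTP status code into the documented exception type."""
    if status in _STATUS_TO_EXC:
        raise _STATUS_TO_EXC[status](message)
    if status >= 500:
        raise APIError(message)
```

With this shape, application code can catch `DataManagerError` to handle every API failure uniformly, or catch `ConflictError` specifically when forking or publishing cells.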
### Logging
API calls and S3 downloads are logged to the `steer_core.DataManager` logger:
```
[steer_core.DataManager] [API] GET /materials/tape_materials/Kapton -> 200 (164 ms)
[steer_core.DataManager] [S3] Downloaded 0.2 KB in 499 ms
```
| text/markdown | null | Nicholas Siemons <nsiemons@stanford.edu> | null | Nicholas Siemons <nsiemons@stanford.edu> | MIT | energy, storage, battery, modeling, simulation | [
"Development Status :: 4 - Beta",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Topic :: Scientific/En... | [] | null | null | >=3.10 | [] | [] | [] | [
"pandas==2.3.3",
"numpy==2.2.6",
"datetime==5.5",
"plotly==6.2.0",
"scipy==1.15.3",
"msgpack==1.1.1",
"msgpack-numpy==0.4.8",
"requests>=2.31.0",
"nbformat==5.10.4",
"shapely==2.1.1",
"lz4==4.4.4",
"pytest>=7.0; extra == \"dev\"",
"pytest-cov; extra == \"dev\"",
"black; extra == \"dev\"",
... | [] | [] | [] | [
"Homepage, https://github.com/nicholas9182/steer-core/",
"Repository, https://github.com/nicholas9182/steer-core/",
"Issues, https://github.com/nicholas9182/steer-core/issues",
"Documentation, https://github.com/nicholas9182/steer-core/"
] | twine/6.1.0 CPython/3.11.4 | 2026-02-21T05:09:31.395078 | steer_core-0.1.45.tar.gz | 41,875 | 8a/e4/0413ea952f4718a03fd00b3206016c8483a28f0360cd62f0c6824cf952ca/steer_core-0.1.45.tar.gz | source | sdist | null | false | f70442ad5ea4fca20955e9736d9d98e2 | 7ff267a6ef77e96ed6bdcb0e1afa261b982840cb17e95a636c74dd5336537b1a | 8ae40413ea952f4718a03fd00b3206016c8483a28f0360cd62f0c6824cf952ca | null | [] | 239 |
2.1 | tomotopy | 0.14.0 | Tomoto, Topic Modeling Tool for Python | tomotopy
========
.. image:: https://badge.fury.io/py/tomotopy.svg
:target: https://pypi.python.org/pypi/tomotopy
.. image:: https://zenodo.org/badge/186155463.svg
:target: https://zenodo.org/badge/latestdoi/186155463
🌐
**English**,
`한국어`_.
.. _한국어: README.kr.rst
What is tomotopy?
------------------
`tomotopy` is a Python extension of `tomoto` (Topic Modeling Tool) which is a Gibbs-sampling based topic model library written in C++.
It utilizes a vectorization of modern CPUs for maximizing speed.
The current version of `tomoto` supports several major topic models including
* Latent Dirichlet Allocation (`tomotopy.LDAModel`)
* Labeled LDA (`tomotopy.LLDAModel`)
* Partially Labeled LDA (`tomotopy.PLDAModel`)
* Supervised LDA (`tomotopy.SLDAModel`)
* Dirichlet Multinomial Regression (`tomotopy.DMRModel`)
* Generalized Dirichlet Multinomial Regression (`tomotopy.GDMRModel`)
* Hierarchical Dirichlet Process (`tomotopy.HDPModel`)
* Hierarchical LDA (`tomotopy.HLDAModel`)
* Multi Grain LDA (`tomotopy.MGLDAModel`)
* Pachinko Allocation (`tomotopy.PAModel`)
* Hierarchical PA (`tomotopy.HPAModel`)
* Correlated Topic Model (`tomotopy.CTModel`)
* Dynamic Topic Model (`tomotopy.DTModel`)
* Pseudo-document based Topic Model (`tomotopy.PTModel`).
Please visit https://bab2min.github.io/tomotopy to see more information.
Getting Started
---------------
You can install tomotopy easily using pip. (https://pypi.org/project/tomotopy/)
::
    $ pip install --upgrade pip
    $ pip install tomotopy
The supported OS and Python versions are:
* Linux (x86-64) with Python >= 3.6
* macOS >= 10.13 with Python >= 3.6
* Windows 7 or later (x86, x86-64) with Python >= 3.6
* Other OS with Python >= 3.6: Compilation from source code required (with c++14 compatible compiler)
After installing, you can start tomotopy by just importing.
::
    import tomotopy as tp

    print(tp.isa)  # prints 'avx512', 'avx2', 'sse2' or 'none'
Currently, tomotopy can exploit the AVX512, AVX2, or SSE2 SIMD instruction sets to maximize performance.
When the package is imported, it checks the available instruction sets and selects the best option.
If `tp.isa` reports `none`, training iterations may take a long time.
But since most modern Intel and AMD CPUs provide a SIMD instruction set, SIMD acceleration usually yields a big improvement.
Here is a sample code for simple LDA training of texts from 'sample.txt' file.
::
    import tomotopy as tp

    mdl = tp.LDAModel(k=20)
    for line in open('sample.txt'):
        mdl.add_doc(line.strip().split())
    for i in range(0, 100, 10):
        mdl.train(10)
        print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))
    for k in range(mdl.k):
        print('Top 10 words of topic #{}'.format(k))
        print(mdl.get_topic_words(k, top_n=10))
    mdl.summary()
Performance of tomotopy
-----------------------
`tomotopy` uses Collapsed Gibbs-Sampling(CGS) to infer the distribution of topics and the distribution of words.
Generally, CGS converges more slowly than the Variational Bayes (VB) used by `gensim's LdaModel`_, but each of its iterations can be computed much faster.
In addition, `tomotopy` can take advantage of multicore CPUs with a SIMD instruction set, which can result in faster iterations.
.. _gensim's LdaModel: https://radimrehurek.com/gensim/models/ldamodel.html
The following chart shows a comparison of LDA running time between `tomotopy` and `gensim`.
The input data consists of 1000 random documents from English Wikipedia with 1,506,966 words (about 10.1 MB).
`tomotopy` trains 200 iterations and `gensim` trains 10 iterations.
.. image:: https://bab2min.github.io/tomotopy/images/tmt_i5.png
Performance in Intel i5-6600, x86-64 (4 cores)
.. image:: https://bab2min.github.io/tomotopy/images/tmt_xeon.png
Performance in Intel Xeon E5-2620 v4, x86-64 (8 cores, 16 threads)
Although `tomotopy` iterated 20 times more, the overall running time was 5 to 10 times faster than `gensim`, and it yields a stable result.
It is difficult to compare CGS and VB directly because they are totally different techniques.
But from a practical point of view, we can compare the speed and the result between them.
The following chart shows the log-likelihood per word of two models' result.
.. image:: https://bab2min.github.io/tomotopy/images/LLComp.png
The SIMD instruction set has a great effect on performance. The following is a comparison between SIMD instruction sets.
.. image:: https://bab2min.github.io/tomotopy/images/SIMDComp.png
Fortunately, most recent x86-64 CPUs provide the AVX2 instruction set, so we can enjoy the performance of AVX2.
Model Save and Load
-------------------
`tomotopy` provides `save` and `load` method for each topic model class,
so you can save the model into the file whenever you want, and re-load it from the file.
::
    import tomotopy as tp

    mdl = tp.HDPModel()
    for line in open('sample.txt'):
        mdl.add_doc(line.strip().split())
    for i in range(0, 100, 10):
        mdl.train(10)
        print('Iteration: {}\tLog-likelihood: {}'.format(i, mdl.ll_per_word))

    # save into file
    mdl.save('sample_hdp_model.bin')

    # load from file
    mdl = tp.HDPModel.load('sample_hdp_model.bin')
    for k in range(mdl.k):
        if not mdl.is_live_topic(k): continue
        print('Top 10 words of topic #{}'.format(k))
        print(mdl.get_topic_words(k, top_n=10))

    # the saved model is an HDP model,
    # so loading it as an LDA model will raise an exception
    mdl = tp.LDAModel.load('sample_hdp_model.bin')
When you load a model from a file, the model type stored in the file must match the class whose `load` method you call.
See more at `tomotopy.LDAModel.save` and `tomotopy.LDAModel.load` methods.
Interactive Model Viewer
------------------------
You can see the result of modeling using the interactive viewer since v0.13.0.
See the project documentation for a demo video.
::
    import tomotopy as tp

    model = tp.LDAModel(...)
    # ... some training codes ...
    tp.viewer.open_viewer(model, host="localhost", port=9999)
    # And open http://localhost:9999 in your web browser!
If you have a saved model file, you can also use the following command line.
::
    python -m tomotopy.viewer a_trained_model.bin --host localhost --port 9999
See more at `tomotopy.viewer` module.
Documents in the Model and out of the Model
-------------------------------------------
We can use topic models for two major purposes.
The basic one is to discover topics from a set of documents as the result of training a model,
and the more advanced one is to infer topic distributions for unseen documents using a trained model.
We call a document used for the former purpose (model training) a **document in the model**,
and a document used for the latter purpose (unseen during training) a **document out of the model**.
In `tomotopy`, these two different kinds of document are generated differently.
A **document in the model** can be created by the `tomotopy.LDAModel.add_doc` method.
`add_doc` can be called before `tomotopy.LDAModel.train` starts.
In other words, once `train` has been called, `add_doc` can no longer add documents to the model, because the set of documents used for training has become fixed.
To acquire the instance of the created document, use `tomotopy.LDAModel.docs` like:
::
    mdl = tp.LDAModel(k=20)
    idx = mdl.add_doc(words)
    if idx < 0: raise RuntimeError("Failed to add doc")
    doc_inst = mdl.docs[idx]
    # doc_inst is an instance of the added document
A **document out of the model** is generated by the `tomotopy.LDAModel.make_doc` method. `make_doc` can be called only after `train` starts.
If you use `make_doc` before the set of documents used for training has become fixed, you may get wrong results.
Since `make_doc` returns the instance directly, you can use its return value for other manipulations.
::
    mdl = tp.LDAModel(k=20)
    # add_doc ...
    mdl.train(100)
    doc_inst = mdl.make_doc(unseen_doc)  # doc_inst is an instance of the unseen document
Inference for Unseen Documents
------------------------------
If a new document is created by `tomotopy.LDAModel.make_doc`, its topic distribution can be inferred by the model.
Inference for an unseen document should be performed using the `tomotopy.LDAModel.infer` method.
::
    mdl = tp.LDAModel(k=20)
    # add_doc ...
    mdl.train(100)
    doc_inst = mdl.make_doc(unseen_doc)
    topic_dist, ll = mdl.infer(doc_inst)
    print("Topic Distribution for Unseen Docs: ", topic_dist)
    print("Log-likelihood of inference: ", ll)
The `infer` method accepts either a single instance of `tomotopy.Document` or a `list` of such instances.
See more at `tomotopy.LDAModel.infer`.
Corpus and transform
--------------------
Every topic model in `tomotopy` has its own internal document type.
A document can be created and added, in the form each model requires, through that model's `add_doc` method.
However, adding the same list of documents to several different models is quite inconvenient,
because `add_doc` must be called separately on each model for the same list of documents.
Thus, `tomotopy` provides `tomotopy.utils.Corpus` class that holds a list of documents.
A `tomotopy.utils.Corpus` can be inserted into any model by passing it as the `corpus` argument to `__init__` or to each model's `add_corpus` method.
Inserting a `tomotopy.utils.Corpus` has the same effect as inserting the documents the corpus holds.
Some topic models require different data for their documents.
For example, `tomotopy.DMRModel` requires argument `metadata` in `str` type,
but `tomotopy.PLDAModel` requires argument `labels` in `List[str]` type.
Since `tomotopy.utils.Corpus` holds an independent set of documents rather than being tied to a specific topic model,
the data attached to a corpus may not match what a given topic model requires when the corpus is added to it.
In this case, the miscellaneous data can be transformed to fit the target topic model using the `transform` argument.
See more details in the following code:
::
    from tomotopy import DMRModel
    from tomotopy.utils import Corpus

    corpus = Corpus()
    corpus.add_doc("a b c d e".split(), a_data=1)
    corpus.add_doc("e f g h i".split(), a_data=2)
    corpus.add_doc("i j k l m".split(), a_data=3)

    model = DMRModel(k=10)
    model.add_corpus(corpus)
    # You lose the `a_data` field in `corpus`,
    # and `metadata` that `DMRModel` requires is filled with the default value, an empty str.
    assert model.docs[0].metadata == ''
    assert model.docs[1].metadata == ''
    assert model.docs[2].metadata == ''

    def transform_a_data_to_metadata(misc: dict):
        # this function transforms `a_data` to `metadata`
        return {'metadata': str(misc['a_data'])}

    model = DMRModel(k=10)
    model.add_corpus(corpus, transform=transform_a_data_to_metadata)
    # Now docs in `model` have non-default `metadata`, generated from the `a_data` field.
    assert model.docs[0].metadata == '1'
    assert model.docs[1].metadata == '2'
    assert model.docs[2].metadata == '3'
Parallel Sampling Algorithms
----------------------------
Since version 0.5.0, `tomotopy` allows you to choose a parallelism algorithm.
The algorithm provided in versions prior to 0.4.2 is `COPY_MERGE`, which is provided for all topic models.
The new algorithm `PARTITION`, available since 0.5.0, makes training generally faster and more memory-efficient, but it is not available for all topic models.
The following chart shows the speed difference between the two algorithms based on the number of topics and the number of workers.
.. image:: https://bab2min.github.io/tomotopy/images/algo_comp.png
.. image:: https://bab2min.github.io/tomotopy/images/algo_comp2.png
Performance by Version
----------------------
Performance changes by version are shown in the following graphs.
The time taken to train an LDA model for 1000 iterations was measured.
(Docs: 11314, Vocab: 60382, Words: 2364724, Intel Xeon Gold 5120 @2.2GHz)
.. image:: https://bab2min.github.io/tomotopy/images/lda-perf-t1.png
.. image:: https://bab2min.github.io/tomotopy/images/lda-perf-t4.png
.. image:: https://bab2min.github.io/tomotopy/images/lda-perf-t8.png
Pinning Topics using Word Priors
--------------------------------
Since version 0.6.0, a new method `tomotopy.LDAModel.set_word_prior` has been added. It allows you to control word prior for each topic.
For example, the following code sets the weight of the word 'church' to 1.0 in topic 0 and to 0.1 in the rest of the topics.
This means that the probability that the word 'church' is assigned to topic 0 is 10 times higher than the probability of it being assigned to any other topic.
Therefore, most occurrences of 'church' are assigned to topic 0, and topic 0 comes to contain many words related to 'church'.
This allows you to pin certain topics to specific topic numbers.
::
    import tomotopy as tp

    mdl = tp.LDAModel(k=20)
    # add documents into `mdl`

    # setting word prior
    mdl.set_word_prior('church', [1.0 if k == 0 else 0.1 for k in range(20)])
See `word_prior_example` in `example.py` for more details.
Examples
--------
You can find an example python code of tomotopy at https://github.com/bab2min/tomotopy/blob/main/examples/ .
You can also get the data file used in the example code at https://drive.google.com/file/d/18OpNijd4iwPyYZ2O7pQoPyeTAKEXa71J/view .
License
---------
`tomotopy` is licensed under the terms of MIT License,
meaning you can use it for any reasonable purpose and remain in complete ownership of all the documentation you produce.
History
-------
* 0.13.0 (2024-08-05)
* New features
* Major features of Topic Model Viewer `tomotopy.viewer.open_viewer()` are ready now.
* `tomotopy.LDAModel.get_hash()` is added. You can get 128bit hash value of the model.
* Add an argument `ngram_list` to `tomotopy.utils.SimpleTokenizer`.
* Bug fixes
* Fixed inconsistent `spans` bug after `Corpus.concat_ngrams` is called.
* Optimized the bottleneck of `tomotopy.LDAModel.load()` and `tomotopy.LDAModel.save()` and improved its speed more than 10 times.
* 0.12.7 (2023-12-19)
* New features
* Added Topic Model Viewer `tomotopy.viewer.open_viewer()`
* Optimized the performance of `tomotopy.utils.Corpus.process()`
* Bug fixes
* `Document.span` now returns the ranges in character unit, not in byte unit.
* 0.12.6 (2023-12-11)
* New features
* Added some convenience features to `tomotopy.LDAModel.train` and `tomotopy.LDAModel.set_word_prior`.
* `LDAModel.train` now has new arguments `callback`, `callback_interval` and `show_progres` to monitor the training progress.
* `LDAModel.set_word_prior` now can accept `Dict[int, float]` type as its argument `prior`.
* 0.12.5 (2023-08-03)
* New features
* Added support for Linux ARM64 architecture.
* 0.12.4 (2023-01-22)
* New features
* Added support for macOS ARM64 architecture.
* Bug fixes
* Fixed an issue where `tomotopy.Document.get_sub_topic_dist()` raises a bad argument exception.
* Fixed an issue where exception raising sometimes causes crashes.
* 0.12.3 (2022-07-19)
* New features
* Now, inserting an empty document using `tomotopy.LDAModel.add_doc()` just ignores it instead of raising an exception. If the newly added argument `ignore_empty_words` is set to False, an exception is raised as before.
* `tomotopy.HDPModel.purge_dead_topics()` method is added to remove non-live topics from the model.
* Bug fixes
* Fixed an issue that prevents setting user defined values for nuSq in `tomotopy.SLDAModel` (by @jucendrero).
* Fixed an issue where `tomotopy.utils.Coherence` did not work for `tomotopy.DTModel`.
* Fixed an issue that often crashed when calling `make_dic()` before calling `train()`.
* Resolved the problem that the results of `tomotopy.DMRModel` and `tomotopy.GDMRModel` are different even when the seed is fixed.
* The parameter optimization process of `tomotopy.DMRModel` and `tomotopy.GDMRModel` has been improved.
* Fixed an issue that sometimes crashed when calling `tomotopy.PTModel.copy()`.
* 0.12.2 (2021-09-06)
* An issue where calling `convert_to_lda` of `tomotopy.HDPModel` with `min_cf > 0`, `min_df > 0` or `rm_top > 0` causes a crash has been fixed.
* A new argument `from_pseudo_doc` was added to `tomotopy.Document.get_topics` and `tomotopy.Document.get_topic_dist`.
This argument is only valid for documents of `PTModel`; it enables controlling the source used to compute the topic distribution.
* A default value for argument `p` of `tomotopy.PTModel` has been changed. The new default value is `k * 10`.
* Using documents generated by `make_doc` without calling `infer` no longer causes a crash; it just prints warning messages.
* An issue where the internal C++ code isn't compiled at clang c++17 environment has been fixed.
* 0.12.1 (2021-06-20)
* An issue where `tomotopy.LDAModel.set_word_prior()` causes a crash has been fixed.
* Now `tomotopy.LDAModel.perplexity` and `tomotopy.LDAModel.ll_per_word` return the accurate value when `TermWeight` is not `ONE`.
* `tomotopy.LDAModel.used_vocab_weighted_freq` was added, which returns term-weighted frequencies of words.
* Now `tomotopy.LDAModel.summary()` shows not only the entropy of words, but also the entropy of term-weighted words.
* 0.12.0 (2021-04-26)
* Now `tomotopy.DMRModel` and `tomotopy.GDMRModel` support multiple values of metadata (see https://github.com/bab2min/tomotopy/blob/main/examples/dmr_multi_label.py )
* The performance of `tomotopy.GDMRModel` was improved.
* A `copy()` method has been added for all topic models to do a deep copy.
* An issue was fixed where words that are excluded from training (by `min_cf`, `min_df`) have incorrect topic id. Now all excluded words have `-1` as topic id.
* Now all exceptions and warnings generated by `tomotopy` follow standard Python types.
* Compiler requirements have been raised to C++14.
* 0.11.1 (2021-03-28)
* A critical bug of asymmetric alphas was fixed. Due to this bug, version 0.11.0 has been removed from releases.
* 0.11.0 (2021-03-26) (removed)
* A new topic model `tomotopy.PTModel` for short texts was added into the package.
* An issue was fixed where `tomotopy.HDPModel.infer` causes a segmentation fault sometimes.
* A mismatch of numpy API version was fixed.
* Now asymmetric document-topic priors are supported.
* Serializing topic models to `bytes` in memory is supported.
* An argument `normalize` was added to `get_topic_dist()`, `get_topic_word_dist()` and `get_sub_topic_dist()` for controlling normalization of results.
* Now `tomotopy.DMRModel.lambdas` and `tomotopy.DMRModel.alpha` give correct values.
* Categorical metadata supports for `tomotopy.GDMRModel` were added (see https://github.com/bab2min/tomotopy/blob/main/examples/gdmr_both_categorical_and_numerical.py ).
* Python3.5 support was dropped.
* 0.10.2 (2021-02-16)
* An issue was fixed where `tomotopy.CTModel.train` fails with large K.
* An issue was fixed where `tomotopy.utils.Corpus` lost its `uid` values.
* 0.10.1 (2021-02-14)
* An issue was fixed where `tomotopy.utils.Corpus.extract_ngrams` crashes with empty input.
* An issue was fixed where `tomotopy.LDAModel.infer` raises an exception on valid input.
* An issue was fixed where `tomotopy.HLDAModel.infer` generates wrong `tomotopy.Document.path`.
* A new parameter `freeze_topics` was added to `tomotopy.HLDAModel.train`, so you can control whether new topics are created during training.
* 0.10.0 (2020-12-19)
* The interfaces of `tomotopy.utils.Corpus` and `tomotopy.LDAModel.docs` were unified. Now you can access documents in a corpus in the same manner.
* `__getitem__` of `tomotopy.utils.Corpus` was improved. In addition to indexing by `int`, indexing by `Iterable[int]`, by slice, and by `uid` is now supported.
* New methods `tomotopy.utils.Corpus.extract_ngrams` and `tomotopy.utils.Corpus.concat_ngrams` were added. They extract n-gram collocations using PMI and concatenate them into single words.
* A new method `tomotopy.LDAModel.add_corpus` was added, and `tomotopy.LDAModel.infer` can receive a corpus as input.
* A new module `tomotopy.coherence` was added. It provides ways to calculate the coherence of a model.
* A parameter `window_size` was added to `tomotopy.label.FoRelevance`.
* An issue was fixed where NaN often occurs when training `tomotopy.HDPModel`.
* Now Python3.9 is supported.
* The dependency on py-cpuinfo was removed and module initialization was improved.
* 0.9.1 (2020-08-08)
* Memory leaks in version 0.9.0 were fixed.
* `tomotopy.CTModel.summary()` was fixed.
* 0.9.0 (2020-08-04)
* The `tomotopy.LDAModel.summary()` method, which prints human-readable summary of the model, has been added.
* The package's random number generator has been replaced with `EigenRand`_. This speeds up random number generation and removes result differences between platforms.
* Due to this change, even if `seed` is the same, training results may differ from versions before 0.9.0.
* Fixed a training error in `tomotopy.HDPModel`.
* `tomotopy.DMRModel.alpha` now shows Dirichlet prior of per-document topic distribution by metadata.
* `tomotopy.DTModel.get_count_by_topics()` has been modified to return a 2-dimensional `ndarray`.
* `tomotopy.DTModel.alpha` has been modified to return the same value as `tomotopy.DTModel.get_alpha()`.
* Fixed an issue where the `metadata` value could not be obtained for the document of `tomotopy.GDMRModel`.
* `tomotopy.HLDAModel.alpha` now shows Dirichlet prior of per-document depth distribution.
* `tomotopy.LDAModel.global_step` has been added.
* `tomotopy.MGLDAModel.get_count_by_topics()` now returns the word count for both global and local topics.
* `tomotopy.PAModel.alpha`, `tomotopy.PAModel.subalpha`, and `tomotopy.PAModel.get_count_by_super_topic()` have been added.
.. _EigenRand: https://github.com/bab2min/EigenRand
* 0.8.2 (2020-07-14)
* New properties `tomotopy.DTModel.num_timepoints` and `tomotopy.DTModel.num_docs_by_timepoint` have been added.
* A bug that caused different results across platforms even when `seed` was the same was partially fixed.
As a result of this fix, 32-bit builds of `tomotopy` now yield training results different from earlier versions.
* 0.8.1 (2020-06-08)
* A bug where `tomotopy.LDAModel.used_vocabs` returned an incorrect value was fixed.
* Now `tomotopy.CTModel.prior_cov` returns a covariance matrix with shape `[k, k]`.
* Now `tomotopy.CTModel.get_correlations` with empty arguments returns a correlation matrix with shape `[k, k]`.
* 0.8.0 (2020-06-06)
* Since NumPy was introduced into tomotopy, many methods and properties now return `numpy.ndarray` instead of just `list`.
* Tomotopy now has a new dependency, `NumPy >= 1.10.0`.
* A wrong estimation of `tomotopy.HDPModel.infer` was fixed.
* A new method for converting an `HDPModel` into an `LDAModel` was added.
* New properties including `tomotopy.LDAModel.used_vocabs`, `tomotopy.LDAModel.used_vocab_freq` and `tomotopy.LDAModel.used_vocab_df` were added into topic models.
* A new g-DMR topic model (`tomotopy.GDMRModel`) was added.
* An error when initializing `tomotopy.label.FoRelevance` on macOS was fixed.
* An error that occurred when using a `tomotopy.utils.Corpus` created without the `raw` parameter was fixed.
* 0.7.1 (2020-05-08)
* `tomotopy.Document.path` was added for `tomotopy.HLDAModel`.
* A memory corruption bug in `tomotopy.label.PMIExtractor` was fixed.
* A compile error in gcc 7 was fixed.
* 0.7.0 (2020-04-18)
* `tomotopy.DTModel` was added into the package.
* A bug in `tomotopy.utils.Corpus.save` was fixed.
* A new method `tomotopy.Document.get_count_vector` was added into Document class.
* Linux distributions now use manylinux2010, and an additional optimization is applied.
* 0.6.2 (2020-03-28)
* A critical bug related to `save` and `load` was fixed. Version 0.6.0 and 0.6.1 have been removed from releases.
* 0.6.1 (2020-03-22) (removed)
* A bug related to module loading was fixed.
* 0.6.0 (2020-03-22) (removed)
* The `tomotopy.utils.Corpus` class, which manages multiple documents easily, was added.
* The `tomotopy.LDAModel.set_word_prior` method, which controls the word-topic priors of topic models, was added.
* A new argument `min_df`, which filters words based on document frequency, was added to every topic model's `__init__`.
* `tomotopy.label`, the submodule for topic labeling, was added. Currently, only `tomotopy.label.FoRelevance` is provided.
* 0.5.2 (2020-03-01)
* A segmentation fault problem was fixed in `tomotopy.LLDAModel.add_doc`.
* A bug was fixed where `tomotopy.HDPModel.infer` sometimes crashed the program.
* A crash in `tomotopy.LDAModel.infer` with `ps=tomotopy.ParallelScheme.PARTITION, together=True` was fixed.
* 0.5.1 (2020-01-11)
* A bug was fixed where `tomotopy.SLDAModel.make_doc` didn't support missing values for `y`.
* Now `tomotopy.SLDAModel` fully supports missing values for the response variable `y`. Documents with missing values (NaN) are included in topic modeling but excluded from the regression of response variables.
* 0.5.0 (2019-12-30)
* Now `tomotopy.PAModel.infer` returns both the topic distribution and the sub-topic distribution.
* New methods `get_sub_topics` and `get_sub_topic_dist` were added to `tomotopy.Document` (for `PAModel`).
* A new parameter `parallel` was added to the `tomotopy.LDAModel.train` and `tomotopy.LDAModel.infer` methods. You can select the parallelism algorithm with this parameter.
* `tomotopy.ParallelScheme.PARTITION`, a new algorithm, was added. It works efficiently when the number of workers is large, or when the number of topics or the vocabulary size is big.
* A bug where `rm_top` didn't work when `min_cf` < 2 was fixed.
* 0.4.2 (2019-11-30)
* Wrong topic assignments of `tomotopy.LLDAModel` and `tomotopy.PLDAModel` were fixed.
* Readable __repr__ of `tomotopy.Document` and `tomotopy.Dictionary` was implemented.
* 0.4.1 (2019-11-27)
* A bug in the init function of `tomotopy.PLDAModel` was fixed.
* 0.4.0 (2019-11-18)
* New models including `tomotopy.PLDAModel` and `tomotopy.HLDAModel` were added into the package.
* 0.3.1 (2019-11-05)
* An issue where `get_topic_dist()` returned incorrect values when `min_cf` or `rm_top` was set was fixed.
* The return value of `get_topic_dist()` for `tomotopy.MGLDAModel` documents was fixed to include local topics.
* The estimation speed with `tw=ONE` was improved.
* 0.3.0 (2019-10-06)
* A new model, `tomotopy.LLDAModel` was added into the package.
* A crashing issue of `HDPModel` was fixed.
* Since hyperparameter estimation for `HDPModel` was implemented, the result of `HDPModel` may differ from previous versions.
If you want to turn off hyperparameter estimation of HDPModel, set `optim_interval` to zero.
* 0.2.0 (2019-08-18)
* New models including `tomotopy.CTModel` and `tomotopy.SLDAModel` were added into the package.
* A new parameter option `rm_top` was added for all topic models.
* The problems in `save` and `load` method for `PAModel` and `HPAModel` were fixed.
* An occasional crash in loading `HDPModel` was fixed.
* The problem that `ll_per_word` was calculated incorrectly when `min_cf` > 0 was fixed.
* 0.1.6 (2019-08-09)
* Compiling errors at clang with macOS environment were fixed.
* 0.1.4 (2019-08-05)
* An issue where `add_doc` received an empty list as input was fixed.
* An issue where `tomotopy.PAModel.get_topic_words` didn't extract the word distribution of sub-topics was fixed.
* 0.1.3 (2019-05-19)
* The parameter `min_cf` and its stopword-removing function were added for all topic models.
* 0.1.0 (2019-05-12)
* First version of **tomotopy**
Bindings for Other Languages
------------------------------
* Ruby: https://github.com/ankane/tomoto
Bundled Libraries and Their License
------------------------------------
* Eigen:
This application uses the MPL2-licensed features of Eigen, a C++ template library for linear algebra.
A copy of the MPL2 license is available at https://www.mozilla.org/en-US/MPL/2.0/.
The source code of the Eigen library can be obtained at http://eigen.tuxfamily.org/.
* EigenRand: `MIT License
<licenses_bundled/EigenRand>`_
Citation
---------
::
@software{minchul_lee_2022_6868418,
author = {Minchul Lee},
title = {bab2min/tomotopy: 0.12.3},
month = jul,
year = 2022,
publisher = {Zenodo},
version = {v0.12.3},
doi = {10.5281/zenodo.6868418},
url = {https://doi.org/10.5281/zenodo.6868418}
}
| text/x-rst | null | bab2min <bab2min@gmail.com> | null | null | MIT | NLP, Topic Model, LDA, HDP, DMR | [
"Development Status :: 4 - Beta",
"Intended Audience :: Information Technology",
"Intended Audience :: Science/Research",
"Topic :: Software Development :: Libraries",
"Topic :: Text Processing :: Linguistic",
"Topic :: Scientific/Engineering :: Information Analysis",
"Programming Language :: Python :: ... | [] | null | null | >=3.9 | [] | [] | [] | [
"numpy"
] | [] | [] | [] | [
"Homepage, https://github.com/bab2min/tomotopy",
"Documentation, https://bab2min.github.io/tomotopy/",
"Repository, https://github.com/bab2min/tomotopy",
"Issues, https://github.com/bab2min/tomotopy/issues"
] | twine/6.2.0 CPython/3.12.3 | 2026-02-21T05:07:48.664845 | tomotopy-0.14.0.tar.gz | 1,106,479 | 77/7f/f12d5db011f1960c1e73bc0dba4559bb5570320cd8b8f3a56dc04241811d/tomotopy-0.14.0.tar.gz | source | sdist | null | false | 8b2d072e9fc22dfc8647c5b3dccfa05a | ce981f29ee91cff4cfc5290637bbc1c1725cc17b93ee384499818f8358d2baa4 | 777ff12d5db011f1960c1e73bc0dba4559bb5570320cd8b8f3a56dc04241811d | null | [] | 514 |
2.4 | geoprepare | 0.6.111 | A Python package to prepare (download, extract, process input data) for GEOCIF and related models | # geoprepare
[](https://pypi.python.org/pypi/geoprepare)
**A Python package to prepare (download, extract, process input data) for GEOCIF and related models**
- Free software: MIT license
- Documentation: https://ritviksahajpal.github.io/geoprepare
## Installation
> **Note:** The instructions below have only been tested on a Linux system
### Install Anaconda
We recommend that you use the conda package manager to install the `geoprepare` library and all its
dependencies. If you do not have it installed already, you can get it from the [Anaconda distribution](https://www.anaconda.com/download#downloads)
### Using the CDS API
If you intend to download AgERA5 data, you will need to install the CDS API.
You can do this by following the instructions [here](https://cds.climate.copernicus.eu/api-how-to)
### Create a new conda environment (optional but highly recommended)
`geoprepare` requires multiple Python GIS packages, including `gdal` and `rasterio`, which are not always easy to install. To make the process easier, you can optionally create a new environment using the commands below; specify the Python version you have on your machine (Python >= 3.9 is recommended). We use the `pygis` library to install several of these GIS packages.
```bash
conda create --name <name_of_environment> python=3.x
conda activate <name_of_environment>
conda install -c conda-forge mamba
mamba install -c conda-forge gdal
mamba install -c conda-forge rasterio
mamba install -c conda-forge xarray
mamba install -c conda-forge rioxarray
mamba install -c conda-forge pyresample
mamba install -c conda-forge cdsapi
mamba install -c conda-forge pygis
pip install wget
pip install pyl4c
```
Install the octvi package to download MODIS data
```bash
pip install git+https://github.com/ritviksahajpal/octvi.git
```
Downloading from the NASA distributed archives (DAACs) requires a personal app key. Users must
configure the module using the `octviconfig` console script. After installation, run `octviconfig`
in your command prompt and enter your personal app key when prompted. Information on obtaining app keys
can be found [here](https://ladsweb.modaps.eosdis.nasa.gov/tools-and-services/data-download-scripts/#tokens)
### Using PyPi (default)
```bash
pip install --upgrade geoprepare
```
### Using Github repository (for development)
```bash
pip install --upgrade --no-deps --force-reinstall git+https://github.com/ritviksahajpal/geoprepare.git
```
### Local installation
Navigate to the directory containing `pyproject.toml` and run the following command:
```bash
pip install .
```
For development (editable install):
```bash
pip install -e ".[dev]"
```
## Pipeline
geoprepare follows a three-stage pipeline:
1. **Download** (`geodownload`) - Download and preprocess global EO datasets to `dir_download` and `dir_intermed`
2. **Extract** (`geoextract`) - Extract EO variable statistics per admin region to `dir_output`
3. **Merge** (`geomerge`) - Merge extracted EO files into per-country/crop CSV files for ML models and AgMet graphics
Additional utilities:
- **Check** (`geocheck`) - Validate that expected TIF files exist in `dir_intermed` after download
- **Diagnostics** (`diagnostics`) - Count and summarize files in the data directories
## Usage
```python
config_dir = "/path/to/config" # full path to your config directory
cfg_geoprepare = [f"{config_dir}/geobase.txt", f"{config_dir}/countries.txt", f"{config_dir}/crops.txt", f"{config_dir}/geoextract.txt"]
```
### 1. Download data (`geodownload`)
Downloads and preprocesses global EO datasets. Only requires `geobase.txt`. The `[DATASETS]` section controls which datasets are downloaded. Each dataset is processed to global 0.05° TIF files in `dir_intermed`.
```python
from geoprepare import geodownload
geodownload.run([f"{config_dir}/geobase.txt"])
```
### 2. Validate downloads (`geocheck`)
Checks that all expected TIF files exist in `dir_intermed` and are non-empty. Writes a timestamped report to `dir_logs/check/`.
```python
from geoprepare import geocheck
geocheck.run([f"{config_dir}/geobase.txt"])
```
### 3. Extract crop masks and EO data (`geoextract`)
Extracts EO variable statistics (mean, median, etc.) for each admin region, crop, and growing season.
```python
from geoprepare import geoextract
geoextract.run(cfg_geoprepare)
```
### 4. Merge extracted data (`geomerge`)
Merges per-region/year EO CSV files into a single CSV per country-crop-season combination.
```python
from geoprepare import geomerge
geomerge.run(cfg_geoprepare)
```
## Config files
| File | Purpose | Used by |
|------|---------|---------|
| [`geobase.txt`](#geobasetxt) | Paths, dataset settings, boundary file column mappings, logging | both |
| [`countries.txt`](#countriestxt) | Per-country config (boundary files, admin levels, seasons, crops) | both |
| [`crops.txt`](#cropstxt) | Crop masks, calendar categories (EWCM, AMIS), EO model variables | both |
| [`geoextract.txt`](#geoextracttxt) | Extraction-only settings (method, threshold, parallelism) | geoprepare |
| [`geocif.txt`](#geociftxt) | Indices/ML/agmet settings, country overrides, runtime selections | geocif |
**Order matters:** Config files are loaded left-to-right. When the same key appears in multiple files, the last file wins. The tool-specific file (`geoextract.txt` or `geocif.txt`) must be last so its `[DEFAULT]` values (countries, method, etc.) override the shared defaults in `countries.txt`.
```python
config_dir = "/path/to/config" # full path to your config directory
cfg_geoprepare = [f"{config_dir}/geobase.txt", f"{config_dir}/countries.txt", f"{config_dir}/crops.txt", f"{config_dir}/geoextract.txt"]
cfg_geocif = [f"{config_dir}/geobase.txt", f"{config_dir}/countries.txt", f"{config_dir}/crops.txt", f"{config_dir}/geocif.txt"]
```
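The last-file-wins rule described above matches the semantics of Python's standard `configparser`, where later files passed to `read()` override keys set by earlier ones. A minimal sketch of that behavior (the key names and file contents below are hypothetical stand-ins, not geoprepare's actual loader):

```python
import configparser
import os
import tempfile

# Two config files sharing [DEFAULT] keys; the file read last wins.
shared = "[DEFAULT]\ncountries = ['kenya', 'malawi']\nmethod = JRC\n"
tool = "[DEFAULT]\ncountries = ['malawi']\n"

tmp = tempfile.mkdtemp()
paths = []
for name, text in [("countries.txt", shared), ("geoextract.txt", tool)]:
    path = os.path.join(tmp, name)
    with open(path, "w") as f:
        f.write(text)
    paths.append(path)

cfg = configparser.ConfigParser()
cfg.read(paths)  # read left-to-right: geoextract.txt overrides countries.txt

print(cfg["DEFAULT"]["countries"])  # ['malawi'] — the tool-specific value wins
print(cfg["DEFAULT"]["method"])     # JRC — inherited from countries.txt
```

This is why the tool-specific file must come last in `cfg_geoprepare` and `cfg_geocif`: any key it omits falls through to the shared defaults, and any key it repeats replaces them.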
## Config file documentation
### geobase.txt
Shared paths, dataset settings, boundary file column mappings, and logging. All directory paths are derived from `dir_base`.
```ini
[DATASETS]
datasets = ['CHIRPS', 'CPC', 'NDVI', 'ESI', 'NSIDC', 'AEF']
[PATHS]
dir_base = /gpfs/data1/cmongp1/GEO
dir_inputs = ${dir_base}/inputs
dir_logs = ${dir_base}/logs
dir_download = ${dir_inputs}/download
dir_intermed = ${dir_inputs}/intermed
dir_metadata = ${dir_inputs}/metadata
dir_condition = ${dir_inputs}/crop_condition
dir_crop_inputs = ${dir_condition}/crop_t20
dir_boundary_files = ${dir_metadata}/boundary_files
dir_crop_calendars = ${dir_metadata}/crop_calendars
dir_crop_masks = ${dir_metadata}/crop_masks
dir_images = ${dir_metadata}/images
dir_production_statistics = ${dir_metadata}/production_statistics
dir_output = ${dir_base}/outputs
; --- Per-dataset settings ---
[AEF]
; AlphaEarth Foundations satellite embeddings (2018-2024, 64 channels, 10m)
; Source: https://source.coop/tge-labs/aef | License: CC-BY 4.0
; Countries are read from geoextract.txt [DEFAULT] countries
buffer = 0.5
download_vrt = True
start_year = 2018
end_year = 2024
[AGERA5]
variables = ['Precipitation_Flux', 'Temperature_Air_2m_Max_24h', 'Temperature_Air_2m_Min_24h']
[CHIRPS]
fill_value = -2147483648
; CHIRPS version: 'v2' for CHIRPS-2.0 or 'v3' for CHIRPS-3.0
version = v3
; Disaggregation method for v3 only: 'sat' (IMERG) or 'rnl' (ERA5)
; - 'sat': Uses NASA IMERG Late V07 for daily downscaling (available from 1998, 0.1° resolution)
; - 'rnl': Uses ECMWF ERA5 for daily downscaling (full time coverage, 0.25° resolution)
; Note: Prelim data is only available with 'sat' due to ERA5 latency (5-6 days)
disagg = sat
[CHIRPS-GEFS]
fill_value = -2147483648
data_dir = /pub/org/chc/products/EWX/data/forecasts/CHIRPS-GEFS_precip_v12/15day/precip_mean/
[CPC]
data_dir = ftp://ftp.cdc.noaa.gov/Datasets
[ESI]
data_dir = https://gis1.servirglobal.net//data//esi//
list_products = ['4wk', '12wk']
[FLDAS]
use_spear = False
data_types = ['forecast']
variables = ['SoilMoist_tavg', 'TotalPrecip_tavg', 'Tair_tavg', 'Evap_tavg', 'TWS_tavg']
leads = [0, 1, 2, 3, 4, 5]
compute_anomalies = False
[NDVI]
product = MOD09CMG
vi = ndvi
scale_glam = False
scale_mark = True
[VIIRS]
product = VNP09CMG
vi = ndvi
scale_glam = False
scale_mark = True
[NSIDC]
[VHI]
data_historic = https://www.star.nesdis.noaa.gov/data/pub0018/VHPdata4users/VHP_4km_GeoTiff/
data_current = https://www.star.nesdis.noaa.gov/pub/corp/scsb/wguo/data/Blended_VH_4km/geo_TIFF/
; --- Boundary file column mappings ---
; Section name = filename stem (without extension)
; Maps source shapefile columns to standard internal names:
; adm0_col -> ADM0_NAME (country)
; adm1_col -> ADM1_NAME (admin level 1)
; adm2_col -> ADM2_NAME (admin level 2, optional)
; id_col -> ADM_ID (unique feature ID)
[adm_shapefile]
adm0_col = ADMIN0
adm1_col = ADMIN1
adm2_col = ADMIN2
id_col = FNID
[gaul1_asap_v04]
adm0_col = name0
adm1_col = name1
id_col = asap1_id
[EWCM_Level_1]
adm0_col = ADM0_NAME
adm1_col = ADM1_NAME
id_col = num_ID
; Add more [boundary_stem] sections as needed for other shapefiles
[LOGGING]
level = ERROR
[POOCH]
; URL to download metadata.zip (boundary files, crop masks, calendars, etc.)
; NOTE: Set this to your own hosted URL (e.g. Dropbox, S3, etc.)
url = <your_metadata_zip_url>
enabled = True
[DEFAULT]
logfile = log
parallel_process = False
fraction_cpus = 0.35
start_year = 2001
end_year = 2026
```
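The `${dir_base}`-style references in the `[PATHS]` section follow `configparser` extended-interpolation syntax. A minimal sketch of how such derived paths resolve, assuming `ExtendedInterpolation` (which the `${...}` syntax suggests, though this README does not state the loader explicitly):

```python
import configparser

text = """
[PATHS]
dir_base = /gpfs/data1/cmongp1/GEO
dir_inputs = ${dir_base}/inputs
dir_intermed = ${dir_inputs}/intermed
"""

# ExtendedInterpolation resolves ${option} references within a section.
cfg = configparser.ConfigParser(
    interpolation=configparser.ExtendedInterpolation()
)
cfg.read_string(text)

print(cfg["PATHS"]["dir_intermed"])  # /gpfs/data1/cmongp1/GEO/inputs/intermed
```

In practice this means changing only `dir_base` is enough to relocate the whole directory tree.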
### countries.txt
Single source of truth for per-country config. Shared by both geoprepare and geocif.
```ini
[DEFAULT]
boundary_file = gaul1_asap_v04.shp
admin_level = admin_1
seasons = [1]
crops = ['maize']
category = AMIS
use_cropland_mask = False
calendar_file = crop_calendar.csv
mask = cropland_v9.tif
statistics_file = statistics.csv
zone_file = countries.csv
shp_region = GlobalCM_Regions_2025-11.shp
eo_model = ['aef', 'nsidc_surface', 'nsidc_rootzone', 'ndvi', 'cpc_tmax', 'cpc_tmin', 'chirps', 'chirps_gefs', 'esi_4wk']
annotate_regions = False
;;; AMIS countries (inherit from DEFAULT, override crops if needed) ;;;
[argentina]
crops = ['soybean', 'winter_wheat', 'maize']
[brazil]
crops = ['maize', 'soybean', 'winter_wheat', 'rice']
[india]
crops = ['rice', 'maize', 'winter_wheat', 'soybean']
[united_states_of_america]
crops = ['rice', 'maize', 'winter_wheat']
; ... (40+ AMIS countries, most inherit DEFAULT crops)
;;; EWCM countries (full per-country config) ;;;
[kenya]
category = EWCM
admin_level = admin_1
seasons = [1, 2]
use_cropland_mask = True
boundary_file = adm_shapefile.gpkg
calendar_file = EWCM_2025-04-21.xlsx
crops = ['maize']
[malawi]
category = EWCM
admin_level = admin_2
use_cropland_mask = True
boundary_file = adm_shapefile.gpkg
calendar_file = EWCM_2025-04-21.xlsx
crops = ['maize']
[ethiopia]
category = EWCM
admin_level = admin_2
use_cropland_mask = True
boundary_file = adm_shapefile.gpkg
calendar_file = EWCM_2025-04-21.xlsx
crops = ['maize', 'sorghum', 'millet', 'rice', 'winter_wheat', 'teff']
; ... (30+ EWCM countries, mostly Sub-Saharan Africa)
;;; Other countries (custom boundary files, non-standard setups) ;;;
[nepal]
crops = ['rice']
boundary_file = hermes_NPL_new_wgs_2.shp
[illinois]
admin_level = admin_3
boundary_file = illinois_counties.shp
```
### crops.txt
Crop mask filenames and calendar category definitions. Calendar categories define the EO variables and crop calendars used for each category of countries.
```ini
;;; Crop masks ;;;
[winter_wheat]
mask = Percent_Winter_Wheat.tif
[spring_wheat]
mask = Percent_Spring_Wheat.tif
[maize]
mask = Percent_Maize.tif
[soybean]
mask = Percent_Soybean.tif
[rice]
mask = Percent_Rice.tif
[teff]
mask = cropland_v9.tif
[sorghum]
mask = cropland_v9.tif
[millet]
mask = cropland_v9.tif
;;; Calendar categories ;;;
[EWCM]
use_cropland_mask = True
shp_boundary = adm_shapefile.gpkg
calendar_file = EWCM_2026-01-05.xlsx
crops = ['maize', 'sorghum', 'millet', 'rice', 'winter_wheat', 'teff']
growing_seasons = [1]
eo_model = ['aef', 'nsidc_surface', 'nsidc_rootzone', 'ndvi', 'cpc_tmax', 'cpc_tmin', 'chirps', 'chirps_gefs', 'esi_4wk']
[AMIS]
calendar_file = AMISCM_2026-01-05.xlsx
```
### geoextract.txt
Extraction-only settings for geoprepare. Loaded last so its `[DEFAULT]` overrides shared defaults.
```ini
[DEFAULT]
project_name = geocif
method = JRC
redo = False
threshold = True
floor = 20
ceil = 90
countries = ["malawi"]
forecast_seasons = [2022]
[PROJECT]
parallel_extract = True
parallel_merge = False
```
### geocif.txt
Indices, ML, and agmet settings for geocif. Country overrides go here when geocif needs values different from those in `countries.txt` (e.g., a subset of crops). Its `[DEFAULT]` section is loaded last and overrides shared defaults for geocif runs.
```ini
[AGMET]
eo_plot = ['ndvi', 'cpc_tmax', 'cpc_tmin', 'chirps', 'esi_4wk', 'nsidc_surface', 'nsidc_rootzone']
logo_harvest = harvest.png
logo_geoglam = geoglam.png
;;; Country overrides (only where geocif differs from countries.txt) ;;;
[ethiopia]
crops = ['winter_wheat']
[bangladesh]
crops = ['rice']
admin_level = admin_2
boundary_file = bangladesh.shp
[india]
crops = ['soybean', 'maize', 'rice']
[somalia]
crops = ['maize']
[ukraine]
crops = ['winter_wheat', 'maize']
;;; ML model definitions ;;;
[catboost]
ML_model = True
[linear]
ml_model = True
[analog]
ML_model = False
[median]
ML_model = False
; ... (additional models: gam, ngboost, tabpfn, desreg, cubist, etc.)
[ML]
model_type = REGRESSION
target = Yield (tn per ha)
feature_selection = BorutaPy
lag_years = 3
panel_model = True
panel_model_region = Country
median_years = 5
lag_yield_as_feature = True
run_latest_time_period = True
run_every_time_period = 3
cat_features = ["Harvest Year", "Region_ID", "Region"]
loocv_var = Harvest Year
[LOGGING]
log_level = INFO
[DEFAULT]
data_source = harvest
method = monthly_r
project_name = geocif
countries = ["kenya"]
crops = ['maize']
admin_level = admin_1
models = ['catboost']
seasons = [1]
threshold = True
floor = 20
input_file_path = ${PATHS:dir_crop_inputs}/processed
```
## Supported datasets
| Dataset | Description | Source |
|---------|-------------|--------|
| AEF | AlphaEarth Foundations satellite embeddings (64-band, 10m) | [source.coop](https://source.coop/tge-labs/aef) |
| AGERA5 | Agrometeorological indicators (precipitation, temperature) | [CDS](https://cds.climate.copernicus.eu) |
| CHIRPS | Rainfall estimates (v2 and v3) | [CHC](https://www.chc.ucsb.edu/data/chirps) |
| CHIRPS-GEFS | 15-day precipitation forecasts | CHC |
| CPC | Temperature (Tmax, Tmin) | NOAA CPC |
| ESI | Evaporative Stress Index (4-week, 12-week) | SERVIR |
| FLDAS | Land surface model outputs (soil moisture, precip, temp) | NASA |
| NDVI | Vegetation index from MODIS (MOD09CMG) | NASA |
| VIIRS | Vegetation index from VIIRS (VNP09CMG) | NASA |
| NSIDC | Soil moisture (surface, rootzone) | NSIDC |
| VHI | Vegetation Health Index | NOAA STAR |
| LST | Land Surface Temperature | NASA |
| AVHRR | Long-term NDVI | NOAA NCEI |
| FPAR | Fraction of Absorbed Photosynthetically Active Radiation | JRC |
| SOIL-MOISTURE | SMAP soil moisture | NASA |
## Upload package to PyPI
Navigate to the **root of the geoprepare repository** (the directory containing `pyproject.toml`):
```bash
cd /path/to/geoprepare
```
### Step 1: Update version
Use `bump2version` to update the version in both `pyproject.toml` and `geoprepare/__init__.py`:
**Using uv:**
```bash
uvx bump2version patch --current-version X.X.X --new-version X.X.Y pyproject.toml geoprepare/__init__.py
```
**Using pip:**
```bash
pip install bump2version
bump2version patch --current-version X.X.X --new-version X.X.Y pyproject.toml geoprepare/__init__.py
```
Or manually edit the version in `pyproject.toml` and `geoprepare/__init__.py`.
### Step 2: Clean old builds
**Linux/macOS:**
```bash
rm -rf dist/ build/ *.egg-info/
```
**Windows (Command Prompt):**
```cmd
rmdir /s /q dist build geoprepare.egg-info
```
**Windows (PowerShell):**
```powershell
Remove-Item -Recurse -Force dist/, build/, *.egg-info/ -ErrorAction SilentlyContinue
```
### Step 3: Build and upload
**Using uv (Linux/macOS):**
```bash
uv build
uvx twine check dist/*
uvx twine upload dist/geoprepare-X.X.X*
```
**Using uv (Windows):**
```cmd
uv build
uvx twine check dist\geoprepare-X.X.X.tar.gz dist\geoprepare-X.X.X-py3-none-any.whl
uvx twine upload dist\geoprepare-X.X.X.tar.gz dist\geoprepare-X.X.X-py3-none-any.whl
```
**Using pip:**
```bash
pip install build twine
python -m build
twine check dist/*
twine upload dist/geoprepare-X.X.X*
```
Replace `X.X.X` with your current version and `X.X.Y` with the new version.
### Optional: Configure PyPI credentials
To avoid entering credentials each time, create a `~/.pypirc` file (Linux/macOS) or `%USERPROFILE%\.pypirc` (Windows):
```ini
[pypi]
username = __token__
password = pypi-YOUR_API_TOKEN_HERE
```
## Credits
This project was supported by NASA Applied Sciences Grant No. 80NSSC17K0625 through the NASA Harvest Consortium, and the NASA Acres Consortium under NASA Grant #80NSSC23M0034.
| text/markdown | null | Ritvik Sahajpal <ritvik@umd.edu> | null | null | MIT | geoprepare, geospatial, agriculture, remote-sensing, earth-observation | [
"Development Status :: 2 - Pre-Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.... | [] | null | null | >=3.9 | [] | [] | [] | [
"rich",
"bump2version; extra == \"dev\"",
"wheel; extra == \"dev\"",
"watchdog; extra == \"dev\"",
"flake8; extra == \"dev\"",
"tox; extra == \"dev\"",
"coverage; extra == \"dev\"",
"Sphinx; extra == \"dev\"",
"twine; extra == \"dev\"",
"grip; extra == \"dev\"",
"pytest; extra == \"dev\"",
"py... | [] | [] | [] | [
"Homepage, https://github.com/ritviksahajpal/geoprepare",
"Documentation, https://ritviksahajpal.github.io/geoprepare",
"Repository, https://github.com/ritviksahajpal/geoprepare",
"Issues, https://github.com/ritviksahajpal/geoprepare/issues"
] | twine/6.2.0 CPython/3.11.13 | 2026-02-21T05:07:31.248706 | geoprepare-0.6.111.tar.gz | 14,777,210 | 82/7f/34738f535b3fcd4622b4b1c2e4928e791a1da7513d68f1344ff7586c3792/geoprepare-0.6.111.tar.gz | source | sdist | null | false | 3e11bcd9a1cad80669da64a00a728245 | 9cff4c1d2bc7ecae74f6c0d1aa8be452c4d399bdd445ecfc74de6c989152667a | 827f34738f535b3fcd4622b4b1c2e4928e791a1da7513d68f1344ff7586c3792 | null | [
"LICENSE"
] | 226 |
2.4 | open-games-spec | 0.1.0 | Typed DSL for Compositional Game Theory — define, verify, and report on open game patterns | # open-games-spec
Typed DSL for compositional game theory, built on [gds-framework](https://github.com/BlockScience/gds-framework).
## What is this?
`open-games-spec` extends the GDS framework with game-theoretic vocabulary — open games, strategic interactions, and compositional game patterns. It provides:
- **6 atomic game types** — Decision, CovariantFunction, ContravariantFunction, FeedbackGame, CorecursiveGame, IdentityGame
- **Pattern composition** — Sequential, Parallel, Feedback, and Corecursive composition operators
- **IR compilation** — Flatten game patterns into JSON-serializable intermediate representation
- **13 verification checks** — Type matching (T-001..T-006) and structural validation (S-001..S-007)
- **7 Markdown report templates** — System overview, verification summary, state machine, interface contracts, and more
- **6 Mermaid diagram generators** — Structural, hierarchy, flow topology, architecture views
- **CLI** — `ogs compile`, `ogs verify`, `ogs report`
## Architecture
```
gds-framework (pip install gds-framework)
│
│ Domain-neutral composition algebra, typed spaces,
│ state model, verification engine, flat IR compiler.
│
└── open-games-spec (pip install open-games-spec)
│
│ Game-theoretic DSL: OpenGame types, Pattern composition,
│ compile_to_ir(), domain verification, reports, visualization.
│
└── Your application
│
│ Concrete pattern definitions, analysis notebooks,
│ verification runners.
```
## Quick start
```bash
pip install open-games-spec
```
```python
from ogs.dsl.games import Decision, CovariantFunction
from ogs.dsl.composition import Flow
from ogs.dsl.pattern import Pattern
from ogs.dsl.compile import compile_to_ir
from ogs import verify
# Define games
sensor = CovariantFunction(name="Sensor", x="observation", y="signal")
agent = Decision(name="Agent", x="signal", y="action", r="reward", s="experience")
# Compose into a pattern
pattern = Pattern(
name="Simple Decision",
games=[sensor, agent],
flows=[Flow(source="Sensor", target="Agent", label="signal")],
)
# Compile and verify
ir_doc = compile_to_ir(pattern)
report = verify(ir_doc)
print(f"{report.checks_passed}/{report.checks_total} checks passed")
```
## License
Apache-2.0
## Credits & Attribution
### Development & Implementation
* **Primary Author:** [Rohan Mehta](mailto:rohan@block.science)
* **Organization:** [BlockScience](https://block.science/)
### Theoretical Foundation
This codebase is a direct implementation of the research and mathematical frameworks developed by:
* **Dr. Jamsheed Shorish** ([@jshorish](https://github.com/jshorish)) and **Dr. Michael Zargham** ([@mzargham](https://github.com/mzargham)).
* **Key Reference:** [Generalized Dynamical Systems, Part I: Foundations](https://blog.block.science/generalized-dynamical-systems-part-i-foundations-2/) (BlockScience, 2021).
### Architectural Inspiration
The design patterns and structural approach of this library are heavily influenced by the prior work of **Sean McOwen** ([@SeanMcOwen](https://github.com/SeanMcOwen)), specifically:
* [MSML](https://github.com/BlockScience/MSML): For system specification logic.
* [bdp-lib](https://github.com/BlockScience/bdp-lib): For block-data processing architecture.
### Contributors
* **Peter Hacker** ([@phacker3](https://github.com/phacker3)) — Code auditing and review (BlockScience).
### Intellectual Lineage
This project exists within the broader ecosystem of:
* [cadCAD](https://github.com/cadCAD-org/cadCAD): For foundational philosophy in Complex Adaptive Dynamics.
| text/markdown | null | Rohan Mehta <rohan@block.science> | null | null | null | categorical-cybernetics, compositional-game-theory, dsl, game-theory, mechanism-design, open-games, verification | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scie... | [] | null | null | >=3.12 | [] | [] | [] | [
"gds-framework>=0.1",
"jinja2>=3.1",
"pydantic>=2.10",
"typer>=0.15"
] | [] | [] | [] | [
"Homepage, https://github.com/BlockScience/open_games_dsl",
"Repository, https://github.com/BlockScience/open_games_dsl"
] | uv/0.10.0 {"installer":{"name":"uv","version":"0.10.0","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null} | 2026-02-21T05:07:19.154446 | open_games_spec-0.1.0.tar.gz | 60,256 | 75/7f/9247011d925412c9b43222f2aecf55a2d045885f5a748e220677c21185e0/open_games_spec-0.1.0.tar.gz | source | sdist | null | false | b315b36ed4f1ac1bcd1ed918734a41f4 | b5d588c2abdbdc5b9efe43832472ccd3f10140b8967d86e3e4f740606c628a02 | 757f9247011d925412c9b43222f2aecf55a2d045885f5a748e220677c21185e0 | Apache-2.0 | [
"LICENSE"
] | 232 |
2.4 | git-pulsar | 0.15.0 | Automated, paranoid git backups for students and casual coding. | # 🔭 Git Pulsar (v0.15.0)
[](https://github.com/jacksonfergusondev/git-pulsar/actions)
[](https://www.python.org/downloads/)
[](LICENSE)
[](https://github.com/astral-sh/ruff)
[](https://github.com/Textualize/rich)
**Fault-tolerant state capture for distributed development.**
> **Standard `git commit` conflates two distinct actions: *saving your work* (frequency: high, noise: high) and *publishing a feature* (frequency: low, signal: high).**
>
> **Git Pulsar decouples them. It is a background daemon that provides high-frequency, out-of-band state capture, ensuring your work is immutable and recoverable without polluting your project history.**
## 📡 The Mission: Decoupling Signal from Noise
In a typical workflow, developers are forced to make "WIP" commits just to switch machines or save their progress. This introduces **entropy** into the commit log, requiring complex interactive rebases to clean up later.
**Git Pulsar** treats the working directory state as a continuous stream of data. It captures this "noise" in a dedicated namespace (`refs/heads/wip/...`), keeping your primary branch purely focused on "signal" (logical units of work).
<picture>
<source media="(prefers-color-scheme: dark)" srcset="demo/demo_dark.gif">
<source media="(prefers-color-scheme: light)" srcset="demo/demo_light.gif">
<img alt="Pulsar demo"
src="demo/demo_light.gif"
width="700"
style="max-width:100%; height:auto;">
</picture>
---
## ⚙️ Engineering Philosophy: Non-Blocking Determinism
This system is designed to operate safely alongside standard Git commands without race conditions or index locking.
### 1. Out-of-Band Indexing (The "Shadow" Index)
Most autosave tools aggressively run `git add .`, which destroys the user's carefully staged partial commits.
- **The Invariant:** The user's `.git/index` must never be touched by the daemon.
- **The Implementation:** Pulsar sets the `GIT_INDEX_FILE` environment variable to a temporary location (`.git/pulsar_index`). It constructs the tree object using low-level plumbing commands (`git write-tree`), bypassing the porcelain entirely. This ensures **Zero-Interference** with your active workflow.
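In plain git plumbing, the shadow-index trick looks roughly like this sketch (the scratch repo, index path, and ref name here are illustrative, not Pulsar's actual internals):

```shell
# Scratch repo for the demo
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"
echo "work in progress" > notes.txt

# Point git at a temporary index so the real .git/index is never touched
export GIT_INDEX_FILE=.git/pulsar_index
git add -A                       # stages into the shadow index only
tree=$(git write-tree)           # build the tree object via plumbing, no porcelain
commit=$(git -c user.name=demo -c user.email=demo@example.com \
  commit-tree "$tree" -p HEAD -m "shadow backup")
git update-ref refs/heads/wip/pulsar/demo "$commit"
unset GIT_INDEX_FILE

git status --porcelain           # notes.txt is still untracked here...
git ls-tree --name-only wip/pulsar/demo   # ...but captured on the backup ref
```

Because `git add` and `git write-tree` both honor `GIT_INDEX_FILE`, the user's staged-but-uncommitted work in the real index is never disturbed.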
### 2. Distributed State Reconciliation (The "Zipper" Graph)
In a distributed environment (Laptop ↔ Desktop), state drift is inevitable.
- **The Mechanism:** Pulsar maintains a separate refspec for each machine ID.
- **The Topology:** When you run `git pulsar finalize`, the engine performs an **Octopus Merge**, traversing the DAG (Directed Acyclic Graph) of all machine streams and squashing them into a single, clean commit on `main`.
### 3. Fault Tolerance
- **The Problem:** Laptops die. SSH connections drop.
- **The Solution:** By decoupling commits from pushes, Pulsar can capture local state every few minutes while conserving battery by pushing to the remote at a lower frequency (e.g., hourly). This guarantees that the **Mean Time To Recovery (MTTR)** is minimized regardless of network availability or hardware failure.
---
## ⚡ Features
- **Decoupled Cycles:** Independent intervals for local commits and remote pushes. Save your battery while staying protected.
- **Smart Identity:** Automatically detects naming collisions with other devices on the remote, ensuring unique backup streams for every machine.
- **Roaming Radar:** The background daemon actively polls for topological drift, firing a cross-platform OS notification if another machine leapfrogs your local session so you can `sync` before conflicts arise.
- **Out-of-Band Indexing:** Backups are stored in a configured namespace (default: `refs/heads/wip/pulsar/...`). Your `git status`, `git branch`, and `git log` remain completely clean.
- **Distributed Sessions:** Hop between machines. Pulsar tracks sessions per device and lets you `sync` to pick up exactly where you left off.
- **State-Aware Diagnostics:** The `doctor` command correlates transient log events with active system health to prevent alert fatigue, proactively scans for pipeline blockers, and offers an interactive queue to safely auto-fix common issues.
- **Active Observability:** The `status` dashboard provides zero-latency power telemetry (e.g., Eco-Mode throttling) and immediately surfaces cached warnings for remote session drift and oversized files.
- **Zero-Interference:**
- Uses a temporary index so it never messes up your partial `git add`.
- Detects if you are rebasing or merging and waits for you to finish.
- Prevents accidental upload of large binaries (configurable threshold).
- **Cascading Config:** Settings are merged from global defaults, `~/.config/git-pulsar/config.toml`, and local `pulsar.toml` or `pyproject.toml` files.
---
## 📦 Installation
### macOS
Install via Homebrew. This automatically manages the background service.
```bash
brew tap jacksonfergusondev/tap
brew install git-pulsar
brew services start git-pulsar
```
### Linux / Generic
Install via `uv` (or `pipx`) and use the built-in service manager to register the systemd timer.
```bash
uv tool install git-pulsar
# This generates and enables a systemd user timer
git pulsar install-service --interval 300
```
---
## 🚀 The Pulsar Workflow
Pulsar is designed to feel like a native git command.
### 1. Initialize & Identify
Navigate to your project. The first time you run Pulsar, it will register the repo, **check for naming collisions**, and start the background protection loop.
```bash
cd ~/University/Astro401
git pulsar
```
*The daemon will now silently snapshot your work based on your configured intervals.*
### 2. Configure Your Intensity
Need high-frequency protection for a critical project? Set a preset or fine-tune the intervals in your project root.
#### pulsar.toml
```toml
[daemon]
preset = "paranoid" # 5min commits, 5min pushes
```
### 3. The "Session Handoff" (Sync)
You worked on your **Desktop** all night but forgot to push manually. You open your **Laptop** at class.
```bash
git pulsar sync
```
*Pulsar checks the remote, finds the newer session from `desktop`, and fast-forwards your working directory to match it.*
### 4. Restore a File
Mess up a script? Grab the version from your last shadow commit.
```bash
# Restore specific file from the latest shadow backup
git pulsar restore src/main.py
```
### 5. Finalize Your Work
When you are ready to submit or merge to `main`:
```bash
git pulsar finalize
```
*This performs an **Octopus Merge**. It pulls the backup history from your Laptop, Desktop, and Lab PC, squashes them all together, and stages the result on `main`.*
---
## 🧬 Environment Bootstrap (macOS)
Pulsar includes a one-click scaffolding tool to set up a modern, robust Python environment.
```bash
git pulsar --env
```
This bootstraps the current directory with:
- **uv:** Initializes a project with fast package management and Python 3.12+ pinning.
- **direnv:** Creates an `.envrc` for auto-activating virtual environments and hooking into the shell.
- **VS Code:** Generates a `.vscode/settings.json` pre-configured to exclude build artifacts and use the local venv.
---
## 🛠 Command Reference
### Backup Management
| Command | Description |
| :--- | :--- |
| `git pulsar` | **Default.** Registers the current repo and ensures the daemon is watching it. |
| `git pulsar now` | Force an immediate backup cycle (commit + push). |
| `git pulsar sync` | Pull the latest session from *any* machine to your current directory. |
| `git pulsar restore <file>` | Restore a specific file from the latest backup. |
| `git pulsar diff` | See what has changed since the last backup. |
| `git pulsar finalize` | Squash-merge all backup streams into `main`. |
### Repository Control
| Command | Description |
| :--- | :--- |
| `git pulsar status` | Show real-time daemon telemetry, active health blockers, and repository status. |
| `git pulsar config` | Open the global configuration file in your default editor. |
| `git pulsar list` | Show all watched repositories and their status. |
| `git pulsar pause` | Temporarily suspend backups for this repo. |
| `git pulsar resume` | Resume backups. |
| `git pulsar remove` | Stop tracking this repository entirely (keeps files). |
| `git pulsar ignore <glob>` | Add a pattern to `.gitignore` (and untrack it if needed). |
### Maintenance
| Command | Description |
| :--- | :--- |
| `git pulsar doctor` | Run state-aware diagnostics and interactively auto-fix issues (logs, repo health, drift detection, hook interference). |
| `git pulsar prune` | Delete old backup history (>30 days). Runs automatically weekly. |
| `git pulsar log` | View recent log history (last 1000 lines) and tail new entries. |
### Service
| Command | Description |
| :--- | :--- |
| `git pulsar install-service` | Register the background daemon (LaunchAgent/Systemd). |
| `git pulsar uninstall-service` | Remove the background daemon. |
---
## ⚙️ Configuration
Settings cascade from Global → Local. Local list options (like `ignore`) append to global ones.
### Options
| Section | Key | Default | Description |
| :--- | :--- | :--- | :--- |
| `daemon` | `preset` | `None` | Use `paranoid`, `aggressive`, `balanced`, or `lazy`. |
| `daemon` | `commit_interval` | `600` | Seconds between local state captures. |
| `daemon` | `push_interval` | `3600` | Seconds between remote pushes. |
| `limits` | `large_file_threshold` | `100MB` | Max file size before aborting a backup. |
### Example `~/.config/git-pulsar/config.toml`
```toml
[daemon]
preset = "balanced"
eco_mode_percent = 25 # Throttles pushes if battery is low
[files]
ignore = ["*.tmp", "node_modules/"]
```
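The cascade can be sketched in a few lines of Python (illustrative only, not Pulsar's actual merge code): nested tables merge key-by-key, scalar keys from the local file override global defaults, and list options like `ignore` append.

```python
def merge(global_cfg, local_cfg):
    """Recursively merge a local config onto global defaults.

    Tables merge key-by-key, scalars override, and list options append
    (so a local `ignore` extends the global one).
    """
    out = dict(global_cfg)
    for key, value in local_cfg.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)   # nested tables: recurse
        elif isinstance(value, list) and isinstance(out.get(key), list):
            out[key] = out[key] + value         # lists: local appends to global
        else:
            out[key] = value                    # scalars: local wins
    return out

global_cfg = {"daemon": {"preset": "balanced", "push_interval": 3600},
              "files": {"ignore": ["*.tmp"]}}
local_cfg = {"daemon": {"preset": "paranoid"},
             "files": {"ignore": ["node_modules/"]}}

merged = merge(global_cfg, local_cfg)
print(merged)
```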
---
## 🗺 Roadmap
### Phase 1: The "Co-Pilot" Update (High Interactivity)
*Focus: Turning the tool from a blind script into a helpful partner that negotiates with you.*
- [ ] **Smart Restore:** Replace hard failures on "dirty" files with a negotiation menu (Overwrite / View Diff / Cancel).
- [ ] **Pre-Flight Checklists:** Display a summary table of incoming changes (machines, timestamps, file counts) before running destructive commands like `finalize`.
- [x] **Active Doctor:** Upgrade `git pulsar doctor` to not just diagnose issues (like stopped daemons), but offer to auto-fix them interactively.
### Phase 2: "Deep Thought" (Context & Intelligence)
*Focus: Leveraging data to make the tool feel alive and aware of your workflow.*
- [ ] **Semantic Shadow Logs:** Replace generic "Shadow backup" messages with auto-generated summaries (e.g., `backup: modified daemon.py (+15 lines)`).
- [x] **Roaming Radar:** Proactively detect if a different machine has pushed newer work to the same branch and notify the user to `sync`.
- [ ] **Decaying Retention:** Implement "Grandfather-Father-Son" pruning (keep all hourly backups for 24h, then daily summaries) to balance safety with disk space.
### Phase 3: The "TUI" Experience (Visuals)
*Focus: Making the invisible backup history tangible and explorable.*
- [ ] **Time Machine UI:** A terminal-based visual browser for `git pulsar restore` that lets you scroll through file history and view side-by-side diffs.
- [ ] **Universal Bootstrap:** Expand `git pulsar --env` to support Linux (apt/dnf) environments alongside macOS.
### Future Horizons
- [ ] **End-to-End Encryption:** Optional GPG encryption for shadow commits.
- [ ] **Windows Support:** Native support for PowerShell and Task Scheduler.
---
## 🤝 Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for details on how to set up the development environment, run tests, and submit pull requests.
## 📄 License
MIT © [Jackson Ferguson](https://github.com/jacksonfergusondev)
| text/markdown | null | jackson.ferguson0@gmail.com | null | null | MIT License
Copyright (c) 2026 Jackson Ferguson
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | null | [
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Programming Language :: Python :: 3.14"
] | [] | null | null | >=3.12 | [] | [] | [] | [
"rich>=14.2.0"
] | [] | [] | [] | [
"Repository, https://github.com/jacksonfergusondev/git-pulsar",
"Issues, https://github.com/jacksonfergusondev/git-pulsar/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:06:29.084664 | git_pulsar-0.15.0.tar.gz | 1,265,341 | 37/1d/2395fb64aa6477d15faa09bc09a9c970c47cad482b57ded27d52ef33d8e4/git_pulsar-0.15.0.tar.gz | source | sdist | null | false | 9c6c76f883720edf4530dc8435634367 | 9362228e902d8e2e5b66eb13df290afcc1ab9b3946ecbb537ee415e6a04a943f | 371d2395fb64aa6477d15faa09bc09a9c970c47cad482b57ded27d52ef33d8e4 | null | [
"LICENSE"
] | 216 |
2.4 | genlist-butler | 1.3.1 | Generate HTML catalogs from music notation files with git-based version tracking | # GenList Butler
[](https://github.com/TuesdayUkes/genlist-butler/actions/workflows/test.yml)
[](https://badge.fury.io/py/genlist-butler)
[](https://pypi.org/project/genlist-butler/)
A command-line tool for generating HTML music archives from ChordPro files, PDFs, and other music notation files. Originally created for the Tuesday Ukulele Group, this tool scans a directory tree of music files and generates a searchable, filterable HTML catalog.
## Features
- 📁 **Smart File Discovery**: Automatically finds ChordPro (.chopro, .cho), PDF, MuseScore, and other music files
- 🔍 **Version Control Integration**: Uses git timestamps to identify the newest version of duplicate files
- 🎯 **Filtering Options**: Hide older versions, mark easy songs, exclude specific files
- 📄 **PDF Generation**: Optional automatic PDF generation from ChordPro files
- 🌐 **Interactive HTML**: Generates searchable, filterable HTML catalogs with modern UI
- 🧠 **Metadata-Aware Search**: Parses `{title:}`, `{subtitle:}`, `{keywords:}` (and optional lyrics) so the catalog can be filtered beyond filenames
- 🎨 **Beautiful Styling**: Includes Tuesday Ukes' professional HTML template - no configuration needed!
- ⚡ **Fast**: Optimized git operations for quick catalog generation
## Requirements
- Python 3.9 or later
- Git (for version tracking features)
- ChordPro (optional, for PDF generation)
## Installation
Install using pipx (recommended):
```bash
pipx install genlist-butler
```
Or using pip:
```bash
pip install genlist-butler
```
## Usage
Basic usage:
```bash
genlist <music_folder> <output_file>
```
### Examples
Generate a catalog with default settings (newest versions only):
```bash
genlist ./music index.html
```
Show all file versions:
```bash
genlist ./music index.html --filter none
```
Hide only files marked with `.hide` extension:
```bash
genlist ./music index.html --filter hidden
```
Generate PDFs from ChordPro files before cataloging:
```bash
genlist ./music index.html --genPDF
```
### Options
- `musicFolder` - Path to the directory containing music files
- `outputFile` - Path where the HTML catalog will be written
- `--filter [none|hidden|timestamp]` - Filtering method (default: timestamp)
- `none`: Show all files
- `hidden`: Hide files with `.hide` extension
- `timestamp`: Show only newest versions based on git history
- `--intro / --no-intro` - Include/exclude introduction section (default: include)
- `--genPDF / --no-genPDF` - Generate PDFs from ChordPro files (default: no)
- `--forcePDF / --no-forcePDF` - Regenerate all PDFs even if they exist (default: no)
### File Markers
GenList Butler uses special marker files:
- **`.hide` files**: Create a file with `.hide` extension (e.g., `song.hide`) to hide all files with the same base name from the catalog
- **`.easy` files**: Create a file with `.easy` extension (e.g., `song.easy`) to mark all files with the same base name as "easy songs" for filtering
### Search Metadata & Lyrics
The search bar now understands more than filenames:
- `{title: ...}` / `{t: ...}` and `{subtitle: ...}` / `{st: ...}` directives are indexed automatically.
- `{keywords: folk; jam; singalong}` directives (or `# keywords:` inline comments) let you define search tags without changing filenames.
- Full lyric text from `.chopro` files is indexed as well, and users can disable lyric-matching with the **Include lyric search** checkbox if they want faster filtering.
Add metadata directly in your ChordPro charts:
```chordpro
{title: Wagon Wheel}
{subtitle: Old Crow Medicine Show}
{keywords: campfire; beginner; singalong}
[G]Heading down south to the [D]land of the pines...
```
Those keywords/subtitles become instantly searchable in the generated HTML.
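A sketch of how such directives could be indexed (the directive names come from this README; the regex, alias handling, and function name are illustrative assumptions):

```python
import re

DIRECTIVE = re.compile(r"\{(title|t|subtitle|st|keywords):\s*([^}]*)\}")
ALIASES = {"t": "title", "st": "subtitle"}

def extract_metadata(chopro_text):
    """Collect searchable metadata from ChordPro directives."""
    meta = {}
    for key, value in DIRECTIVE.findall(chopro_text):
        key = ALIASES.get(key, key)
        if key == "keywords":
            # keywords are semicolon-separated tags
            meta[key] = [k.strip() for k in value.split(";")]
        else:
            meta[key] = value.strip()
    return meta

chart = """{title: Wagon Wheel}
{st: Old Crow Medicine Show}
{keywords: campfire; beginner; singalong}
[G]Heading down south to the [D]land of the pines..."""
print(extract_metadata(chart))
```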
### Custom HTML Styling
GenList-Butler includes a beautiful, professional HTML template out of the box (Tuesday Ukes' styling). However, you can customize it:
1. Create your own `HTMLheader.txt` file in your working directory
2. Run genlist from that directory
3. Your custom header will be used instead of the default
The generated HTML will use your custom styling while maintaining all the interactive search/filter functionality.
## How It Works
1. **Scans** the music folder recursively for supported file types
2. **Groups** files by song title (normalized, ignoring articles)
3. **Filters** based on the selected method:
- Uses git history to find the newest version of each file
- Respects `.hide` marker files
- Processes `.easy` marker files for special highlighting
4. **Generates** an interactive HTML page with:
- Searchable song list
- Download links for all file formats
- Optional filtering for easy songs
- Toggle for showing all versions
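Step 2 (grouping by normalized title) might look like this sketch; the exact normalization rules (extension stripping, leading-article removal) are assumptions for illustration:

```python
import re
from collections import defaultdict

def normalize_title(filename):
    """Normalize a music filename to a grouping key (assumed rules)."""
    stem = re.sub(r"\.(chopro|cho|pdf|mscz)$", "", filename, flags=re.IGNORECASE)
    stem = stem.strip().lower()
    return re.sub(r"^(the|a|an)\s+", "", stem)  # ignore leading articles

files = ["The Wagon Wheel.chopro", "wagon wheel.pdf", "Amazing Grace.cho"]
groups = defaultdict(list)
for f in files:
    groups[normalize_title(f)].append(f)
print(dict(groups))
```

Both spellings of "Wagon Wheel" land in one group, so the catalog can offer every format of a song under a single entry.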
## Development
To contribute or modify:
```bash
# Clone the repository
git clone https://github.com/TuesdayUkes/genlist-butler.git
cd genlist-butler
# Install in development mode
pip install -e ".[dev]"
# Run tests
pytest
```
## License
MIT License - see LICENSE file for details
## Credits
Created for the Tuesday Ukulele Group (https://tuesdayukes.org/)
Maintained by the TUG community.
| text/markdown | null | Tuesday Ukulele Group <tuesdayukes@gmail.com> | null | null | MIT | music, catalog, chordpro, html, git, ukulele | [
"Development Status :: 4 - Beta",
"Intended Audience :: End Users/Desktop",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Languag... | [] | null | null | >=3.9 | [] | [] | [] | [
"first>=2.0.0",
"pytest>=7.0; extra == \"dev\"",
"black>=23.0; extra == \"dev\"",
"flake8>=6.0; extra == \"dev\"",
"mypy>=1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/TuesdayUkes/genlist-butler",
"Repository, https://github.com/TuesdayUkes/genlist-butler",
"Bug Tracker, https://github.com/TuesdayUkes/genlist-butler/issues",
"Documentation, https://github.com/TuesdayUkes/genlist-butler#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:05:49.256469 | genlist_butler-1.3.1.tar.gz | 25,001 | ab/26/884abe89279eacf332d3e2b2cf4eef5a234abae308a6a65a47abf76af86b/genlist_butler-1.3.1.tar.gz | source | sdist | null | false | 81b60d2bcc002f626837eb3d4ee8bb93 | 28f30f47af284b646f7ca0981ecbeb4ccfd93a021b3df7a210fed291d2060d82 | ab26884abe89279eacf332d3e2b2cf4eef5a234abae308a6a65a47abf76af86b | null | [
"LICENSE"
] | 227 |
2.4 | models-dev | 1.0.137 | Typed Python interface to models.dev API data | # models-dev
Typed Python package for [models.dev](https://models.dev) data. Access 2000+ LLM models from 75+ providers with full typing support. No HTTP calls - data is bundled and auto-updated hourly.
See [GitHub](https://github.com/vklimontovich/models-dev) for documentation and examples.
| text/markdown | vklmn | null | null | null | null | ai, anthropic, llm, models, openai | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: P... | [] | null | null | >=3.10 | [] | [] | [] | [] | [] | [] | [] | [
"Homepage, https://github.com/vklimontovich/models-dev",
"Documentation, https://models.dev"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:05:31.802034 | models_dev-1.0.137.tar.gz | 122,927 | 11/d8/b5ba88165ba7ea320d416d10a0e0138b1be9d94555431bd92fc563ec3edf/models_dev-1.0.137.tar.gz | source | sdist | null | false | bd6640a75d39be9cf86027cbd071182e | 2ce1f4516f5c3af0babd3366a2596917c0d573bb8cf41c113924cd10b8cf678f | 11d8b5ba88165ba7ea320d416d10a0e0138b1be9d94555431bd92fc563ec3edf | MIT | [
"LICENSE"
] | 251 |
2.4 | iflow2api-sdk | 0.1.1 | Python SDK for iflow2api - exposes iFlow CLI's AI services as an OpenAI-compatible API | # iFlow2API SDK
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
Python SDK for iFlow2API, providing OpenAI-compatible API access to iFlow CLI's AI services.
## Features
- 🔐 **Automatic authentication** - HMAC-SHA256 request signing handled for you
- 🔄 **Sync/async support** - Both synchronous and asynchronous clients
- 📡 **Streaming responses** - Full support for SSE streaming
- 🛠️ **Type safety** - Pydantic models with complete type hints
- ⚡ **OpenAI-compatible** - API design modeled on the OpenAI SDK
## Installation
```bash
pip install iflow2api-sdk
```
## 📖 [More usage examples](EXAMPLES.md)
## Quick Start
### Synchronous client
```python
from iflow2api_sdk import IFlowClient

# Create a client
client = IFlowClient(
    api_key="your-api-key",
    base_url="https://apis.iflow.cn/v1"  # optional; defaults to https://apis.iflow.cn/v1
)

# List available models
models = client.models.list()
for model in models.data:
    print(model.id)

# Create a chat completion
response = client.chat.completions.create(
    model="glm-5",
    messages=[
        {"role": "user", "content": "Hello, please introduce yourself"}
    ]
)
print(response.choices[0].message.content)

# Use the client as a context manager
with IFlowClient(api_key="your-api-key") as client:
    response = client.chat.completions.create(
        model="glm-5",
        messages=[{"role": "user", "content": "Hello!"}]
    )
```
### Asynchronous client
```python
import asyncio
from iflow2api_sdk import AsyncIFlowClient

async def main():
    async with AsyncIFlowClient(api_key="your-api-key") as client:
        # List models
        models = await client.models.list()
        # Create a chat completion
        response = await client.chat.completions.create(
            model="glm-5",
            messages=[{"role": "user", "content": "Hello!"}]
        )
        print(response.choices[0].message.content)

asyncio.run(main())
```
### Streaming responses
```python
from iflow2api_sdk import IFlowClient

client = IFlowClient(api_key="your-api-key")

# Streaming chat completion
stream = client.chat.completions.create(
    model="glm-5",
    messages=[{"role": "user", "content": "Write a poem"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
Asynchronous streaming:
```python
import asyncio
from iflow2api_sdk import AsyncIFlowClient

async def main():
    async with AsyncIFlowClient(api_key="your-api-key") as client:
        stream = await client.chat.completions.create(
            model="glm-5",
            messages=[{"role": "user", "content": "Write a poem"}],
            stream=True
        )
        async for chunk in stream:
            if chunk.choices[0].delta.content:
                print(chunk.choices[0].delta.content, end="", flush=True)

asyncio.run(main())
```
## Supported Models
### Text models
| Model ID | Description |
|---------|------|
| `glm-4.6` | Zhipu GLM-4.6 |
| `glm-4.7` | Zhipu GLM-4.7 |
| `glm-5` | Zhipu GLM-5 (recommended) |
| `iFlow-ROME-30BA3B` | iFlow ROME 30B (fast) |
| `deepseek-v3.2-chat` | DeepSeek V3.2 chat model |
| `qwen3-coder-plus` | Qwen3 Coder Plus |
| `kimi-k2` | Moonshot Kimi K2 |
| `kimi-k2-thinking` | Moonshot Kimi K2 thinking model |
| `kimi-k2.5` | Moonshot Kimi K2.5 |
| `kimi-k2-0905` | Moonshot Kimi K2 0905 |
| `minimax-m2.5` | MiniMax M2.5 |
### Vision models
| Model ID | Description |
|---------|------|
| `qwen-vl-max` | Qwen VL Max vision model |
> **Note**: The model list may change as the iFlow service evolves. Use `client.models.list()` to fetch the current list of available models.
## API Reference
### IFlowClient
The synchronous client.
**Parameters:**
- `api_key` (str): API key
- `base_url` (str, optional): API base URL; defaults to `https://apis.iflow.cn/v1`
- `timeout` (float, optional): Request timeout in seconds; defaults to 300
- `session_id` (str, optional): Session ID; a UUID is generated by default
### AsyncIFlowClient
The asynchronous client; takes the same parameters as the synchronous client.
### Chat Completions
```python
client.chat.completions.create(
    model: str,                 # model ID
    messages: List[Dict],       # message list
    stream: bool = False,       # whether to stream the response
    temperature: float = None,  # sampling temperature
    max_tokens: int = None,     # maximum number of tokens
    top_p: float = None,        # top-p sampling
    tools: List[Dict] = None,   # tool definitions
    tool_choice: str = None,    # tool selection strategy
)
```
### Models
```python
# List all models
models = client.models.list()
# Retrieve a specific model
model = client.models.retrieve("glm-5")
```
## Error Handling
The SDK provides the following exception classes:
```python
from iflow2api_sdk import (
    IFlowError,           # base SDK exception
    APIError,             # base class for API errors
    AuthenticationError,  # authentication error
    RateLimitError,       # rate-limit error
    ModelNotFoundError,   # model not found
    InvalidRequestError,  # invalid request
    ValidationError,      # parameter validation error
)

try:
    response = client.chat.completions.create(
        model="invalid-model",
        messages=[{"role": "user", "content": "Hello"}]
    )
except ModelNotFoundError as e:
    print(f"Model not found: {e}")
except RateLimitError as e:
    print(f"Rate limited: {e}")
except ValidationError as e:
    print(f"Validation failed: {e}")
except APIError as e:
    print(f"API error: {e}")
```
## Authentication
The SDK handles iFlow CLI's authentication signing automatically:
1. Identifies the client with `User-Agent: iFlow-Cli`
2. Generates an HMAC-SHA256 signature over `{user_agent}:{session_id}:{timestamp}`
3. Adds the required request headers automatically
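A minimal sketch of this signing scheme (the payload layout follows the steps above; the header names and hex encoding are assumptions for illustration):

```python
import hashlib
import hmac
import time
import uuid

def sign_request(secret, user_agent="iFlow-Cli", session_id=None):
    """Build signed request headers.

    The payload layout ({user_agent}:{session_id}:{timestamp}) follows the
    scheme described above; the X-* header names are assumed for this sketch.
    """
    session_id = session_id or str(uuid.uuid4())
    timestamp = str(int(time.time()))
    payload = f"{user_agent}:{session_id}:{timestamp}"
    signature = hmac.new(secret.encode(), payload.encode(), hashlib.sha256).hexdigest()
    return {
        "User-Agent": user_agent,
        "X-Session-Id": session_id,  # assumed header name
        "X-Timestamp": timestamp,    # assumed header name
        "X-Signature": signature,    # assumed header name
    }

print(sign_request("demo-secret"))
```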
## Development
### Install development dependencies
```bash
git clone https://github.com/your-repo/iflow2api-sdk.git
cd iflow2api-sdk
pip install -e ".[dev]"
```
### Run tests
```bash
pytest tests/ -v
```
### Format code
```bash
black iflow2api_sdk/
isort iflow2api_sdk/
```
## License
MIT License
## Related Projects
- [iflow2api](../reference/iflow2api) - iFlow CLI API proxy service
| text/markdown | null | iflow2api <1475429618@qq.com> | null | null | MIT | ai, api, chat, chatgpt, completions, iflow, llm, openai, sdk | [
"Development Status :: 4 - Beta",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pytho... | [] | null | null | >=3.10 | [] | [] | [] | [
"httpx>=0.28.0",
"pydantic>=2.0.0",
"mypy>=1.0.0; extra == \"dev\"",
"pytest-asyncio>=0.24.0; extra == \"dev\"",
"pytest-cov>=4.0.0; extra == \"dev\"",
"pytest>=8.0.0; extra == \"dev\"",
"ruff>=0.1.0; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/cacaview/iflow2api",
"Documentation, https://github.com/cacaview/iflow2api#readme",
"Repository, https://github.com/cacaview/iflow2api",
"Issues, https://github.com/cacaview/iflow2api/issues"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T05:05:24.899643 | iflow2api_sdk-0.1.1.tar.gz | 21,096 | 11/28/d2819c0df481ef54f0fdc4688e2f34e89757844399b084dcb73db45374da/iflow2api_sdk-0.1.1.tar.gz | source | sdist | null | false | 624fd5a4200382c2ce2c01430faba02a | 2d606f7ee2ec20dccfe4677fa39a836290db793362220930231ec8c416f22ec4 | 1128d2819c0df481ef54f0fdc4688e2f34e89757844399b084dcb73db45374da | null | [] | 218 |
2.4 | radboy | 0.0.960 | A Retail Calculator for Android/Linux | # Radboy
To use this application at its simplest, install it:
`pip install radboy==$VERSION`
Then create a `Run.py` file, paste the following into it, and save:
`from radboy import Run`
For lookups to work, please explore the in-prompt help text, as a populated data file is not yet ready for consumption.
| text/markdown | Carl Joseph Hirner III | Carl Hirner III <k.j.hirner.wisdom@gmail.com> | null | null | null | null | [
"Programming Language :: Python :: 3",
"Operating System :: OS Independent"
] | [] | null | null | >=3.6 | [] | [] | [] | [
"scipy",
"inputimeout",
"gdown",
"biip",
"sympy",
"scipy",
"plotext",
"haversine",
"holidays",
"odfpy",
"qrcode[pil]",
"chardet",
"nanoid",
"random-password-generator",
"cython",
"pint",
"pyupc-ean",
"openpyxl",
"plyer",
"colored",
"numpy",
"pandas",
"Pillow",
"python-b... | [] | [] | [] | [
"Homepage, https://google.com",
"Issues, https://google.com"
] | twine/6.2.0 CPython/3.13.7 | 2026-02-21T05:03:00.086114 | radboy-0.0.960.tar.gz | 5,686,546 | ef/8a/06ae77073b0ed1c38a15a7b50ad5aace1d3db074fa9fe4e91bf0530ecc97/radboy-0.0.960.tar.gz | source | sdist | null | false | 7b6d211bd345a5949732b410bf58c571 | 94dd6d48de8fa64e621cc586122cd4060717be786c1b8ee7a6883f0be432879e | ef8a06ae77073b0ed1c38a15a7b50ad5aace1d3db074fa9fe4e91bf0530ecc97 | MIT | [] | 211 |
2.4 | eip-mcp | 0.2.9 | MCP server for the Exploit Intelligence Platform — vulnerability and exploit intelligence for AI assistants | # Exploit Intel Platform MCP Server
Package/command: `eip-mcp`
<p align="center">
<img src="https://exploit-intel.com/static/brand/mark-cyan.svg" width="160" alt="Exploit Intel Platform (EIP)" />
</p>
An MCP (Model Context Protocol) server that gives AI assistants access to the [Exploit Intelligence Platform](https://exploit-intel.com) — 370K+ vulnerabilities and 105K+ exploits from NVD, CISA KEV, EPSS, ExploitDB, Metasploit, GitHub, and more.
Part of the same project family:
- [`eip-search`](https://codeberg.org/exploit-intel/eip-search) — terminal client
- [`eip-mcp`](https://codeberg.org/exploit-intel/eip-mcp) — MCP server for AI assistants
## Highlights
- Give AI assistants real-time vulnerability and exploit intelligence
- Query CVEs with rich filters and ranked exploit context
- Include AI exploit analysis, MITRE ATT&CK mapping, and trojan indicators
- Generate pentest findings directly from CVE data
- Every exploit includes a clickable source URL (GitHub, ExploitDB, Metasploit)
- Nuclei templates include description, impact, and remediation text
## What This Enables
With this MCP server, your AI assistant can:
- Search vulnerabilities with 15+ filters (severity, vendor, product, EPSS, KEV, Nuclei, year, date range)
- Search exploits by source, language, author, GitHub stars, or LLM classification
- Get full CVE intelligence briefs with ranked exploits and trojan warnings
- Find all exploits for a specific CVE, vendor, or product
- Resolve alternate IDs (EDB-XXXXX, GHSA-XXXXX) to their CVE
- Discover exact product names for any vendor (CPE product name lookup)
- Look up exploit authors and their work
- Browse CWE categories and vendor threat landscapes
- Audit a tech stack for exploitable vulnerabilities
- Generate pentest report findings from real CVE data (all sections present with N/A when data is absent)
- Retrieve exploit source code for analysis
- See MITRE ATT&CK techniques and deception indicators for trojans
## Tools (16)
| Tool | Description |
|---|---|
| `search_vulnerabilities` | Search CVEs with filters: severity, vendor, product, ecosystem, CWE, CVSS/EPSS thresholds, KEV, Nuclei, year, date range. Supports pagination. |
| `get_vulnerability` | Full intelligence brief for a CVE or EIP-ID with ranked exploits (AI analysis, MITRE techniques, source URLs), products, Nuclei templates (with description/impact/remediation), references |
| `search_exploits` | Search exploits by source, language, LLM classification, author, stars, CVE, vendor, product. Filter by AI analysis: attack_type, complexity, reliability, requires_auth. Paginated. |
| `get_exploit_code` | Retrieve exploit source code by platform ID (auto-selects main file) |
| `get_nuclei_templates` | Nuclei scanner templates with description, impact, remediation, and Shodan/FOFA/Google dork queries |
| `list_authors` | Top exploit researchers ranked by exploit count |
| `get_author` | Author profile with all their exploits and CVE context |
| `list_cwes` | CWE categories ranked by vulnerability count |
| `get_cwe` | CWE detail with description, exploit likelihood, parent hierarchy |
| `list_vendors` | Software vendors ranked by vulnerability count |
| `list_products` | Discover exact product names for a vendor (CPE name lookup with vuln counts) |
| `lookup_alt_id` | Resolve alternate IDs (EDB-XXXXX, GHSA-XXXXX) to their CVE |
| `audit_stack` | Audit a tech stack for critical/high severity CVEs with exploits, sorted by EPSS risk |
| `generate_finding` | Generate a Markdown pentest report finding — all sections present with N/A when data is absent |
| `get_platform_stats` | Platform-wide counts and data freshness |
| `check_health` | API health and ingestion source timestamps |
## Installation
### Requirements
- **Python 3.10 or newer** (check with `python3 --version` or `python --version`)
- **pip** (comes with Python on most systems)
- An MCP-compatible AI client (Cursor IDE, Claude Desktop, etc.)
### macOS
```bash
# Install Python 3 via Homebrew if needed
brew install python3
# Recommended: pipx (isolated install, eip-mcp command available globally)
brew install pipx
pipx install eip-mcp
# Alternative: virtual environment
python3 -m venv ~/.venvs/eip-mcp
source ~/.venvs/eip-mcp/bin/activate
pip install eip-mcp
```
### Kali Linux / Debian / Ubuntu
```bash
# Python 3 is pre-installed on Kali. Install pip if needed:
sudo apt update && sudo apt install -y python3-pip python3-venv
# Recommended: pipx (isolated install, eip-mcp command available globally)
sudo apt install -y pipx
pipx install eip-mcp
# Alternative: virtual environment
python3 -m venv ~/.venvs/eip-mcp
source ~/.venvs/eip-mcp/bin/activate
pip install eip-mcp
```
> **Kali users**: If you see `error: externally-managed-environment`, use `pipx` or a virtual environment. Kali 2024+ enforces PEP 668, which blocks system-wide pip installs.
### Windows
```powershell
# Install Python 3 from https://python.org (check "Add to PATH" during install)
# Option 1: pipx
pip install pipx
pipx install eip-mcp
# Option 2: virtual environment
python -m venv %USERPROFILE%\.venvs\eip-mcp
%USERPROFILE%\.venvs\eip-mcp\Scripts\activate
pip install eip-mcp
```
### Arch Linux / Manjaro
```bash
sudo pacman -S python python-pip python-pipx
pipx install eip-mcp
```
### From Source (all platforms)
```bash
git clone git@codeberg.org:exploit-intel/eip-mcp.git
cd eip-mcp
python3 -m venv .venv
source .venv/bin/activate # Linux/macOS
# .venv\Scripts\activate # Windows
pip install -e .
```
## Connecting to Your AI Client
### Cursor IDE
Add to `.cursor/mcp.json` in your workspace (or globally at `~/.cursor/mcp.json`):
**If installed with pipx** (recommended):
```json
{
"mcpServers": {
"eip": {
"command": "eip-mcp",
"args": [],
"env": {}
}
}
}
```
**If installed in a virtual environment:**
```json
{
"mcpServers": {
"eip": {
"command": "/absolute/path/to/.venvs/eip-mcp/bin/eip-mcp",
"args": [],
"env": {}
}
}
}
```
> **Note**: When using a virtual environment, use the absolute path to the `eip-mcp` binary inside it. On macOS/Linux: `~/.venvs/eip-mcp/bin/eip-mcp`. On Windows: `%USERPROFILE%\.venvs\eip-mcp\Scripts\eip-mcp.exe`.
### Claude Desktop
**macOS** — add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
"mcpServers": {
"eip": {
"command": "eip-mcp",
"args": [],
"env": {}
}
}
}
```
**Windows** — add to `%APPDATA%\Claude\claude_desktop_config.json`:
```json
{
"mcpServers": {
"eip": {
"command": "eip-mcp",
"args": [],
"env": {}
}
}
}
```
> If your AI client can't find `eip-mcp`, use the full path to the binary (see virtual environment note above).
### Verify
After restarting your AI client, you should see 16 tools available. Try asking:
> "Show me all trojan exploits"
### Troubleshooting
| Problem | Solution |
|---|---|
| MCP server not showing up | If using a venv, use the full absolute path to the `eip-mcp` binary |
| `command not found: eip-mcp` | Make sure your venv is activated, or use `pipx` which manages PATH automatically |
| `externally-managed-environment` | Use `pipx` or a virtual environment (see install instructions above) |
| Connection timeout errors | Check that you can reach `https://exploit-intel.com` from your machine |
| 0 tools showing | Restart Cursor/Claude Desktop after editing the MCP config |
## Demo
[](https://asciinema.org/a/hSVPAlO9qNqQIxug)
## What Questions Can You Ask?
Below are real questions tested against the live platform, with actual output.
---
### "Show me all the backdoored/trojan exploits"
Uses `search_exploits` with `llm_classification=trojan`:
```
Found 21 exploits (page 1/7):
★0 github hn1e13/test-mcp
CVE-2025-54135 HIGH CVSS:8.5 [markdown] trojan
AI: RCE | trivial | theoretical
!! Embedded AI automation commands disguised as configuration
!! Decoy Python script unrelated to the vulnerability
★0 github Rosemary1337/CVE-2025-6934
CVE-2025-6934 CRITICAL CVSS:9.8 [python] trojan
AI: other | moderate | theoretical
!! Obfuscated code execution
!! External payload decryption
★1 github Markusino488/cve-2025-8088
CVE-2025-8088 HIGH CVSS:8.8 [python] trojan
AI: other | moderate | reliable
!! Misleading README describing a security tool while the code drops malicious payloads
!! Suspicious download links in README pointing to the same ZIP file
```
21 exploits flagged as trojans by AI analysis. Each shows deception indicators explaining exactly how the trojan deceives users.
---
### "Find all reliable RCE exploits"
Uses `search_exploits` with `attack_type=RCE, reliability=reliable, sort=stars_desc`:
```
Found 17,720 exploits (page 1/3544):
★4275 nomisec zhzyker/exphub
CVE-2020-14882 CRITICAL CVSS:9.8 [] working_poc
AI: RCE | moderate | reliable
★3436 nomisec fullhunt/log4j-scan
CVE-2021-44228 CRITICAL CVSS:10.0 [] scanner
AI: RCE | moderate | reliable
★1848 nomisec kozmer/log4j-shell-poc
CVE-2021-44228 CRITICAL CVSS:10.0 [] working_poc
AI: RCE | moderate | reliable
★1835 github neex/phuip-fpizdam
CVE-2019-11043 HIGH CVSS:8.7 [] working_poc
AI: RCE | moderate | reliable
```
17,720 reliable RCE exploits. Filter further with `complexity=trivial` for easy wins or `requires_auth=false` for unauthenticated attacks.
---
### "Show me trivial SQL injection exploits that don't require auth"
Uses `search_exploits` with `attack_type=SQLi, complexity=trivial, requires_auth=false`:
```
Found 6,979 exploits (page 1/1396):
★0 github pwnpwnpur1n/CVE-2024-22983
CVE-2024-22983 HIGH CVSS:8.1 [php] writeup
AI: SQLi | trivial | reliable
★0 github security-n/CVE-2021-39379
CVE-2021-39379 CRITICAL CVSS:9.8 [] writeup
AI: SQLi | trivial | reliable
...
```
---
### "Give me all exploits for CVE-2024-3400"
Uses `search_exploits` with `cve=CVE-2024-3400, sort=stars_desc`:
```
Found 43 exploits (page 1/9):
★161 github h4x0r-dz/CVE-2024-3400
CVE-2024-3400 CRITICAL CVSS:10.0 [http] working_poc
★90 github W01fh4cker/CVE-2024-3400-RCE-Scan
CVE-2024-3400 CRITICAL CVSS:10.0 [python] working_poc
★72 github 0x0d3ad/CVE-2024-3400
CVE-2024-3400 CRITICAL CVSS:10.0 [python] working_poc
★30 github ihebski/CVE-2024-3400
CVE-2024-3400 CRITICAL CVSS:10.0 [http/network] working_poc
★14 github Chocapikk/CVE-2024-3400
CVE-2024-3400 CRITICAL CVSS:10.0 [python] working_poc
```
43 exploits, ranked by GitHub stars, with LLM quality classification.
---
### "How many Mitel exploits are there?"
Uses `search_exploits` with `vendor=mitel, has_code=true`:
```
Found 783 exploits (page 1/157):
exploitdb EDB-46666
CVE-2019-9591 MEDIUM CVSS:6.1 []
exploitdb EDB-32745
CVE-2014-0160 HIGH CVSS:7.5 [python]
★0 github lu4m575/CVE-2024-35286_scan.nse
CVE-2024-35286 CRITICAL CVSS:9.8 []
★17 github Chocapikk/CVE-2024-41713
CVE-2024-41713 CRITICAL CVSS:9.1 [python] working_poc
...
```
783 Mitel exploits with downloadable code, across all affected CVEs.
---
### "Who are the top exploit authors?"
Uses `list_authors`:
```
Exploit Authors (23,144 total):
metasploit 2098 exploits
Ihsan Sencan 1658 exploits
Google Security Research 1355 exploits
LiquidWorm 1336 exploits
Luigi Auriemma 629 exploits
High-Tech Bridge SA 613 exploits
Vulnerability-Lab 596 exploits
Gjoko 'LiquidWorm' Krstic 567 exploits
rgod 531 exploits
indoushka 517 exploits
```
---
### "Show me all exploits by Chocapikk"
Uses `get_author` with `author_name=Chocapikk`:
```
Author: Chocapikk
Exploits: 60 | Active since: 2017-04-25
Exploits:
★244 CVE-2026-21858 Chocapikk/CVE-2026-21858 working_poc
★179 CVE-2024-25600 Chocapikk/CVE-2024-25600 working_poc
★134 CVE-2024-45519 Chocapikk/CVE-2024-45519 working_poc
★99 CVE-2024-3273 Chocapikk/CVE-2024-3273 working_poc
★49 CVE-2024-56145 Chocapikk/CVE-2024-56145 working_poc
★47 CVE-2024-9474 Chocapikk/CVE-2024-9474 working_poc
★41 CVE-2025-55182 Chocapikk/CVE-2025-55182 working_poc
★41 CVE-2024-8504 Chocapikk/CVE-2024-8504 working_poc
★36 CVE-2024-27198 Chocapikk/CVE-2024-27198 working_poc
...
```
60 exploits by Chocapikk, ranked by GitHub stars, all classified as working PoCs.
---
### "What are the most common vulnerability types?"
Uses `list_cwes`:
```
CWE Categories (200 with vulnerabilities):
CWE-79 41774 vulns XSS
CWE-89 17788 vulns SQL Injection
CWE-787 13374 vulns Out-of-Bounds Write
CWE-119 13344 vulns Memory Corruption
CWE-20 11770 vulns Improper Input Validation
CWE-200 9555 vulns Information Disclosure
CWE-352 8710 vulns CSRF
CWE-125 8163 vulns Out-of-Bounds Read
CWE-22 8141 vulns Path Traversal
CWE-862 6683 vulns Missing Authorization
...
```
---
### "Tell me about SQL Injection (CWE-89)"
Uses `get_cwe` with `cwe_id=CWE-89`:
```
CWE-89: Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')
Short label: SQL Injection
Exploit likelihood: High
Vulnerabilities: 17,788
Parent: CWE-943 (Improper Neutralization of Special Elements in Data Query Logic)
Description:
The product constructs all or part of an SQL command using externally-influenced
input from an upstream component, but it does not neutralize or incorrectly
neutralizes special elements that could modify the intended SQL command when it
is sent to a downstream component...
```
---
### "Which vendors have the most vulnerabilities?"
Uses `list_vendors`:
```
Top Vendors (200 total):
microsoft 13697 vulns
google 12451 vulns
linux 12096 vulns
oracle 10107 vulns
debian 10072 vulns
apple 8426 vulns
ibm 7981 vulns
adobe 6960 vulns
cisco 6526 vulns
redhat 5505 vulns
...
```
---
### "What critical Fortinet vulns are being actively exploited?"
Uses `search_vulnerabilities` with `vendor=fortinet, severity=critical, is_kev=true, sort=epss_desc`:
```
Found 24 vulnerabilities (page 1/5):
CVE-2018-13379 CRITICAL CVSS:9.1 EPSS:94.5% Exploits:14 [KEV]
Fortinet FortiProxy < 1.2.9 - Path Traversal
CVE-2022-40684 CRITICAL CVSS:9.8 EPSS:94.4% Exploits:30 [KEV]
Fortinet FortiProxy < 7.0.7 - Authentication Bypass
CVE-2023-48788 CRITICAL CVSS:9.8 EPSS:94.2% Exploits:1 [KEV] [NUCLEI]
Fortinet FortiClient Endpoint Management Server - SQL Injection
CVE-2024-55591 CRITICAL CVSS:9.8 EPSS:94.2% Exploits:8 [KEV]
Fortinet FortiProxy < 7.0.20 - Authentication Bypass
CVE-2022-42475 CRITICAL CVSS:9.8 EPSS:94.0% Exploits:7 [KEV]
Fortinet FortiOS < 5.0.14 - Buffer Overflow
```
---
### "Tell me about CVE-2019-0708 (BlueKeep)"
Uses `get_vulnerability` with `cve_id=CVE-2019-0708`:
```
============================================================
CVE-2019-0708 [CRITICAL] [KEV]
============================================================
Title: BlueKeep RDP Remote Windows Kernel Use After Free
CVSS: 9.8 EPSS: 94.5% (100.0th percentile)
Attack Vector: NETWORK | CWE: CWE-416 | Published: 2019-05-16 | KEV Added: 2021-11-03
EXPLOITS (127 total):
METASPLOIT MODULES:
- cve_2019_0708_bluekeep_rce.rb [ruby] Rank: manual
AI: RCE | complexity:complex | reliability:racy | target:Microsoft Windows 7 SP1
MITRE: T1059 - Command and Scripting Interpreter, T1068 - Exploitation for Privilege Escalation
VERIFIED (ExploitDB):
- EDB-47416 [ruby] verified
AI: RCE | complexity:complex | reliability:racy | target:Microsoft Windows RDP (7 SP1 / 2008 R2)
MITRE: T1068, T1210 - Exploitation of Remote Services
PROOF OF CONCEPT:
- ★1187 nomisec Ekultek/BlueKeep working_poc
AI: RCE | complexity:moderate | reliability:reliable | target:Windows RDP
MITRE: T1189 - Drive-by Compromise, T1068
- ★914 nomisec robertdavidgraham/rdpscan scanner
AI: info_leak | complexity:moderate | reliability:reliable
MITRE: T1046 - Network Service Scanning
...and 113 more PoCs
*** SUSPICIOUS / TROJAN ***:
- WARNING: ttsite/CVE-2019-0708- [TROJAN] — flagged by AI analysis
Summary: The repository is a scam and does not contain any exploit code.
Deception indicators:
- False claims about exploit availability
- Deceptive contact information
- No actual exploit code or technical details
```
Every exploit now shows AI analysis: attack type, complexity, reliability, target software, and MITRE ATT&CK techniques. Trojans show deception indicators explaining exactly how they deceive users.
---
### "Audit our stack: nginx, postgresql, redis"
Uses `audit_stack` with `technologies=nginx, postgresql, redis`:
```
STACK AUDIT RESULTS
========================================
--- NGINX (66 exploitable CVEs) ---
CVE-2023-44487 HIGH CVSS:7.5 EPSS:94.4% Exploits:22 [KEV]
HTTP/2 Rapid Reset DoS
CVE-2013-2028 CVSS:-- EPSS:92.8% Exploits:25
Nginx < 1.4.0 - Out-of-Bounds Write
CVE-2017-7529 HIGH CVSS:7.5 EPSS:91.9% Exploits:54
Nginx <1.14 - Info Disclosure
...and 56 more
--- POSTGRESQL (56 exploitable CVEs) ---
CVE-2019-9193 HIGH CVSS:7.2 EPSS:93.4% Exploits:41
PostgreSQL < 11.2 - OS Command Injection
CVE-2018-1058 HIGH CVSS:8.8 EPSS:82.7% Exploits:13
PostgreSQL < 9.3.22 - Improper Input Validation
...and 46 more
--- REDIS (39 exploitable CVEs) ---
CVE-2022-0543 CRITICAL CVSS:10.0 EPSS:94.4% Exploits:32 [KEV]
Redis Lua Sandbox Escape
CVE-2018-11218 CRITICAL CVSS:9.8 EPSS:80.3% Exploits:3
Redis < 3.2.12 - Out-of-Bounds Write
...and 29 more
Total: 30 findings shown across 3 technologies
```
---
### "Get me the Nuclei dorks for TeamCity"
Uses `get_nuclei_templates` with `cve_id=CVE-2024-27198`:
```
NUCLEI TEMPLATES (1):
Template: CVE-2024-27198 [critical] [verified]
Name: TeamCity < 2023.11.4 - Authentication Bypass
Author: DhiyaneshDk
Tags: cve, cve2024, teamcity, jetbrains, auth-bypass, kev
Recon Queries:
Shodan: http.component:"TeamCity" || http.title:teamcity
FOFA: title=teamcity
Google: intitle:teamcity
Run: nuclei -t CVE-2024-27198 -u https://target.com
```
---
### "Write a pentest finding for CVE-2024-3400"
Uses `generate_finding` with `cve_id=CVE-2024-3400, target=fw.corp.example.com, notes=Confirmed RCE via GlobalProtect`:
```markdown
# CVE-2024-3400: Palo Alto Networks PAN-OS Unauthenticated Remote Code Execution
**Severity:** CRITICAL
**CVSS v3 Score:** 10.0 (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H)
**EPSS Score:** 94.3% probability of exploitation
**CISA KEV:** Yes — confirmed actively exploited in the wild
**CWE:** CWE-77, CWE-20
**Affected Target:** fw.corp.example.com
## Description
A command injection vulnerability in the GlobalProtect feature of PAN-OS...
## Exploit Availability (43 public exploits)
- **Metasploit:** panos_telemetry_cmd_exec.rb (rank: excellent)
## References
- https://security.paloaltonetworks.com/CVE-2024-3400
## Tester Notes
Confirmed RCE via GlobalProtect
```
---
### "List all Metasploit modules"
Uses `search_exploits` with `source=metasploit`:
```
Found 3,350 exploits (page 1/670):
metasploit modules/auxiliary/gather/ni8mare_cve_2026_21858.rb
CVE-2026-21858 CRITICAL CVSS:10.0 [ruby] working_poc
metasploit modules/exploits/multi/handler.rb
no-CVE ? [ruby]
metasploit modules/exploits/example_linux_priv_esc.rb
no-CVE ? [ruby]
...
```
3,350 Metasploit modules indexed.
---
## Security Model
This MCP server runs locally and proxies requests to the public EIP API over HTTPS.
### Input Validation
Every parameter passes through strict validation:
- **CVE/EIP IDs**: Regex `^(CVE|EIP)-\d{4}-\d{4,7}$`
- **Exploit IDs**: Positive integers, capped at 2^31
- **Strings**: Max 200 chars, null bytes rejected, control characters stripped
- **Numerics**: CVSS 0-10, EPSS 0-1, per_page 1-25
- **Enums**: Severity, sort, ecosystem validated against allowlists
- **File paths**: `..`, absolute paths, null bytes all blocked
- **Technology names**: Alphanumeric + dots/hyphens/spaces, max 5 items
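For illustration, the CVE/EIP ID rule above can be reproduced with Python's `re` module. The pattern is quoted from the validation list; the helper function itself is ours, not eip-mcp's actual code:

```python
import re

# Pattern quoted from the validation rules above; the helper is
# illustrative, not eip-mcp's implementation.
ID_PATTERN = re.compile(r"^(CVE|EIP)-\d{4}-\d{4,7}$")

def is_valid_id(value: str) -> bool:
    """Return True only for well-formed CVE-YYYY-NNNN / EIP-YYYY-NNNN IDs."""
    return ID_PATTERN.fullmatch(value) is not None

print(is_valid_id("CVE-2024-3400"))  # True
print(is_valid_id("CVE-24-3400"))    # False: year must be exactly 4 digits
```

Using `fullmatch` (rather than `search`) is what makes the check strict: partial matches like `xCVE-2024-3400x` are rejected outright.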
### Response Safety
- Exploit code capped at 50KB
- All responses are plain text (no executable content)
- Error messages are generic (no internal API leakage)
- Trojan exploits are explicitly flagged
### Network Safety
- API base URL hardcoded to `https://exploit-intel.com`
- TLS verification enabled
- 30-second timeout on all calls
- Optional API key via `EIP_API_KEY` environment variable
## API Key (Optional)
For higher rate limits, set an API key in the MCP config:
```json
{
"mcpServers": {
"eip": {
"command": "eip-mcp",
"args": [],
"env": {
"EIP_API_KEY": "your-key-here"
}
}
}
}
```
No API key is required. The public API allows 60 requests/minute.
## Building Packages
### Build Dependencies
| Target | Requirements |
|---|---|
| `make build` | Python 3, `build` module (`pip install build`) |
| `make check` / `make pypi` | `twine` (`pip install twine`) |
| `make deb` | Docker |
| `make tag-release` | Python 3 (version bump only — Codeberg Actions handles the rest) |
| `make release` | All of the above + `tea` CLI ([codeberg.org/gitea/tea](https://codeberg.org/gitea/tea)) |
Install everything at once:
```bash
pip install build twine
# Docker: https://docs.docker.com/get-docker/
# tea CLI: https://codeberg.org/gitea/tea
```
### PyPI (wheel + sdist)
```bash
make build # build dist/*.whl and dist/*.tar.gz
make check # validate with twine
make pypi # upload to PyPI
```
### .deb Packages
Build for a single distro or all four supported targets:
```bash
make deb DISTRO=ubuntu-jammy # Ubuntu 22.04
make deb DISTRO=ubuntu-noble # Ubuntu 24.04
make deb DISTRO=debian-bookworm # Debian 12
make deb DISTRO=kali # Kali Rolling
make deb # all four
```
### Releasing
**One-time setup:** add `PYPI_API_TOKEN` and `RELEASE_TOKEN` as repository secrets in Codeberg (Settings → Actions → Secrets).
**Automated release (recommended)** — bumps version, commits, tags, and pushes. Codeberg Actions builds PyPI packages + all 4 `.deb`s, uploads to PyPI, and creates a release with artifacts attached:
```bash
make tag-release VERSION=0.2.0
```
**Local release (alternative)** — does everything locally without CI:
```bash
make release VERSION=0.2.0
```
## Dependencies
- `mcp>=1.2.0` — Official MCP Python SDK
- `httpx>=0.27.0` — HTTP client
- Python 3.10+
## License
MIT
| text/markdown | Exploit Intelligence Platform | null | null | null | null | mcp, security, exploits, vulnerability, cve, ai, model-context-protocol | [
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Information Technology",
"Topic :: Security",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12"
] | [] | null | null | >=3.10 | [] | [] | [] | [
"mcp>=1.2.0",
"httpx>=0.27.0",
"mcp>=1.6.0; extra == \"http\"",
"uvicorn>=0.27.0; extra == \"http\"",
"starlette>=0.36.0; extra == \"http\"",
"sse-starlette>=1.6.0; extra == \"http\""
] | [] | [] | [] | [
"Homepage, https://exploit-intel.com",
"Repository, https://codeberg.org/exploit-intel/eip-mcp"
] | twine/6.2.0 CPython/3.12.12 | 2026-02-21T05:01:52.051748 | eip_mcp-0.2.9.tar.gz | 42,362 | 7a/61/c8e2c8165125fe9570128790267b675a163b9e16c9538766aba2a3eda433/eip_mcp-0.2.9.tar.gz | source | sdist | null | false | eb963404ab8cc04e1f66be08d9b657bf | c5bd74d8f3bbccb7b7d3d9d6aa075473a070ef5a3416526e24ffb7154ce361db | 7a61c8e2c8165125fe9570128790267b675a163b9e16c9538766aba2a3eda433 | MIT | [
"LICENSE"
] | 237 |
2.4 | predicate-authority | 0.4.7 | Deterministic pre-execution authority layer for AI agents. | # predicate-authority
`predicate-authority` is a deterministic pre-execution authority layer for AI agents.
It binds identity, policy, and runtime evidence so that risky actions are authorized
before execution and denied, fail-closed, when any check does not pass.
Docs: https://www.PredicateSystems.ai/docs
GitHub repo: https://github.com/PredicateSystems/predicate-authority
Core pieces:
- `PolicyEngine` for allow/deny + required verification labels,
- `ActionGuard` for pre-action `authorize` / `enforce`,
- `LocalMandateSigner` for signed short-lived mandates,
- `InMemoryProofLedger` and optional `OpenTelemetryTraceEmitter`,
- typed integration adapters (including `sdk-python` mapping helpers),
- control-plane client primitives for shipping proof and usage batches to hosted APIs,
- local identity registry primitives (ephemeral task identities + local flush queue).
## Quick usage example
```python
from predicate_authority import ActionGuard, InMemoryProofLedger, LocalMandateSigner, PolicyEngine
from predicate_contracts import (
ActionRequest,
ActionSpec,
PolicyEffect,
PolicyRule,
PrincipalRef,
StateEvidence,
VerificationEvidence,
)
guard = ActionGuard(
policy_engine=PolicyEngine(
rules=(
PolicyRule(
name="allow-orders",
effect=PolicyEffect.ALLOW,
principals=("agent:orders",),
actions=("http.post",),
resources=("https://api.vendor.com/orders",),
),
)
),
mandate_signer=LocalMandateSigner(secret_key="replace-with-strong-secret"),
proof_ledger=InMemoryProofLedger(),
)
request = ActionRequest(
principal=PrincipalRef(principal_id="agent:orders", tenant_id="tenant-a"),
action_spec=ActionSpec(
action="http.post",
resource="https://api.vendor.com/orders",
intent="create order",
),
state_evidence=StateEvidence(source="backend", state_hash="sha256:example"),
verification_evidence=VerificationEvidence(),
)
decision = guard.authorize(request)
print("allowed=", decision.allowed, "reason=", decision.reason.value)
```
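For intuition, the deny-by-default behavior the guard enforces can be sketched generically. This is illustrative logic only, not predicate-authority's actual implementation:

```python
from dataclasses import dataclass

# Illustrative rule shape, loosely mirroring PolicyRule above.
@dataclass(frozen=True)
class Rule:
    effect: str            # "allow" or "deny"
    principals: tuple
    actions: tuple
    resources: tuple

def evaluate(rules, principal: str, action: str, resource: str) -> bool:
    """First matching rule wins; no match means deny (fail-closed)."""
    for rule in rules:
        if (principal in rule.principals
                and action in rule.actions
                and resource in rule.resources):
            return rule.effect == "allow"
    return False  # fail-closed: nothing matched, so deny

rules = (Rule("allow", ("agent:orders",), ("http.post",),
              ("https://api.vendor.com/orders",)),)
print(evaluate(rules, "agent:orders", "http.post",
               "https://api.vendor.com/orders"))  # True
print(evaluate(rules, "agent:unknown", "http.post",
               "https://api.vendor.com/orders"))  # False: no rule matches
```

The key property is the final `return False`: an unmatched request is never implicitly allowed.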
## Entra compatibility demo (capability-gated OBO)
```bash
python examples/delegation/entra_obo_compat_demo.py \
--tenant-id "$ENTRA_TENANT_ID" \
--client-id "$ENTRA_CLIENT_ID" \
--client-secret "$ENTRA_CLIENT_SECRET" \
--scope "${ENTRA_SCOPE:-api://predicate-authority/.default}"
```
## OIDC compatibility demo (capability-gated token exchange)
```bash
python examples/delegation/oidc_compat_demo.py \
--issuer "$OIDC_ISSUER" \
--client-id "$OIDC_CLIENT_ID" \
--client-secret "$OIDC_CLIENT_SECRET" \
--audience "$OIDC_AUDIENCE" \
--scope "${OIDC_SCOPE:-authority:check}"
```
If your provider supports token exchange and you have a subject token:
```bash
python examples/delegation/oidc_compat_demo.py \
--issuer "$OIDC_ISSUER" \
--client-id "$OIDC_CLIENT_ID" \
--client-secret "$OIDC_CLIENT_SECRET" \
--audience "$OIDC_AUDIENCE" \
--scope "${OIDC_SCOPE:-authority:check}" \
--subject-token "$OIDC_SUBJECT_TOKEN" \
--supports-token-exchange
```
## Local IdP quick example
```python
from predicate_authority import LocalIdPBridge, LocalIdPBridgeConfig
from predicate_contracts import PrincipalRef, StateEvidence
bridge = LocalIdPBridge(
LocalIdPBridgeConfig(
issuer="http://localhost/predicate-local-idp",
audience="api://predicate-authority",
signing_key="replace-with-strong-secret",
token_ttl_seconds=300,
)
)
token = bridge.exchange_token(
PrincipalRef(principal_id="agent:local", tenant_id="tenant-a"),
StateEvidence(source="backend", state_hash="sha256:local-state"),
)
print(token.provider.value, token.access_token[:24] + "...")
```
| text/markdown | Predicate Systems | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [
"predicate-contracts<0.5.0,>=0.1.0",
"pyyaml>=6.0",
"cryptography>=42.0.0",
"opentelemetry-api>=1.24.0; extra == \"telemetry\""
] | [] | [] | [] | [
"Documentation, https://www.PredicateSystems.ai/docs"
] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T05:01:20.290156 | predicate_authority-0.4.7.tar.gz | 46,341 | b4/d1/ef26ab522c8e1ddab0ed8eff377a2e8ded62b313653dda66970affde940f/predicate_authority-0.4.7.tar.gz | source | sdist | null | false | 15652c6ae4d8eeeba330005e96533a1d | 3938844de385fab386909d6e951e3ceffb3c109abd3f62697f43fe837a798046 | b4d1ef26ab522c8e1ddab0ed8eff377a2e8ded62b313653dda66970affde940f | MIT OR Apache-2.0 | [] | 293 |
2.4 | predicate-contracts | 0.4.7 | Shared typed contracts for Predicate authority and integrations. | # predicate-contracts
`predicate-contracts` is the shared contract package for Predicate authority workflows.
It contains:
- typed data contracts (`ActionRequest`, `PolicyRule`, `AuthorizationDecision`, etc.),
- integration protocols (`StateEvidenceProvider`, `VerificationEvidenceProvider`, `TraceEmitter`),
- no runtime dependency on `sdk-python` internals or authority runtime logic.
| text/markdown | Predicate Systems | null | null | null | null | null | [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.11",
"Typing :: Typed"
] | [] | null | null | >=3.11 | [] | [] | [] | [] | [] | [] | [] | [] | twine/6.2.0 CPython/3.11.14 | 2026-02-21T05:01:02.009556 | predicate_contracts-0.4.7.tar.gz | 8,384 | a9/56/bcc53a819c9c2402333fcd80b8853155d123b861fd633cf9cd1de6b0418c/predicate_contracts-0.4.7.tar.gz | source | sdist | null | false | 36d24c07a094fe5b63c9674230be2e2e | 989a8ccce5b1942b5d7f6e3e550344dff90a10512e93d78d718ecced5004d359 | a956bcc53a819c9c2402333fcd80b8853155d123b861fd633cf9cd1de6b0418c | MIT OR Apache-2.0 | [] | 305 |
2.4 | openbrowser-ai | 0.1.26 | Agentic browser automation using LangGraph and raw CDP | # OpenBrowser
**Automating Walmart Product Scraping:**
https://github.com/user-attachments/assets/ae5d74ce-0ac6-46b0-b02b-ff5518b4b20d
**OpenBrowserAI Automatic Flight Booking:**
https://github.com/user-attachments/assets/632128f6-3d09-497f-9e7d-e29b9cb65e0f
[](https://pypi.org/project/openbrowser-ai/)
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/billy-enrizky/openbrowser-ai/actions)
**AI-powered browser automation using CodeAgent and CDP (Chrome DevTools Protocol)**
OpenBrowser is a framework for intelligent browser automation. It combines direct CDP communication with a CodeAgent architecture, in which the LLM writes Python code that executes in a persistent namespace, to navigate, interact with, and extract information from web pages autonomously.
## Table of Contents
- [Documentation](#documentation)
- [Key Features](#key-features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#configuration)
- [Supported LLM Providers](#supported-llm-providers)
- [Claude Code Plugin](#claude-code-plugin)
- [Codex](#codex)
- [OpenCode](#opencode)
- [OpenClaw](#openclaw)
- [MCP Server](#mcp-server)
- [MCP Benchmark: Why OpenBrowser](#mcp-benchmark-why-openbrowser)
- [CLI Usage](#cli-usage)
- [Project Structure](#project-structure)
- [Testing](#testing)
- [Contributing](#contributing)
- [License](#license)
- [Contact](#contact)
## Documentation
**Full documentation**: [https://docs.openbrowser.me](https://docs.openbrowser.me)
## Key Features
- **CodeAgent Architecture** - LLM writes Python code in a persistent Jupyter-like namespace for browser automation
- **Raw CDP Communication** - Direct Chrome DevTools Protocol for maximum control and speed
- **Vision Support** - Screenshot analysis for visual understanding of pages
- **12+ LLM Providers** - OpenAI, Anthropic, Google, Groq, AWS Bedrock, Azure OpenAI, Ollama, and more
- **MCP Server** - Model Context Protocol support for Claude Desktop integration
- **Video Recording** - Record browser sessions as video files
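The persistent, Jupyter-like namespace mentioned above can be pictured as repeated `exec` calls against one shared dict. This is an illustrative sketch, not OpenBrowser's actual executor:

```python
# Each snippet the LLM emits runs against the same namespace, so state
# from one step (variables, helpers) remains visible in later steps.
namespace: dict = {}

def run_step(code: str) -> None:
    exec(code, namespace)

run_step("results = []")
run_step("results.append('page-1 scraped')")  # sees results from step 1
print(namespace["results"])  # ['page-1 scraped']
```

This is why a multi-step task can build on earlier work (open a page in one step, extract from it in the next) without re-establishing context each time.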
## Installation
```bash
pip install openbrowser-ai
```
### With Optional Dependencies
```bash
# Install with all LLM providers
pip install openbrowser-ai[all]
# Install specific providers
pip install openbrowser-ai[anthropic] # Anthropic Claude
pip install openbrowser-ai[groq] # Groq
pip install openbrowser-ai[ollama] # Ollama (local models)
pip install openbrowser-ai[aws] # AWS Bedrock
pip install openbrowser-ai[azure] # Azure OpenAI
# Install with video recording support
pip install openbrowser-ai[video]
```
### Install Browser
```bash
uvx openbrowser-ai install
# or
playwright install chromium
```
## Quick Start
### Basic Usage
```python
import asyncio
from openbrowser import CodeAgent, ChatGoogle
async def main():
agent = CodeAgent(
task="Go to google.com and search for 'Python tutorials'",
llm=ChatGoogle(model="gemini-2.0-flash"),
)
result = await agent.run()
print(f"Result: {result}")
asyncio.run(main())
```
### With Different LLM Providers
```python
from openbrowser import CodeAgent, ChatOpenAI, ChatAnthropic, ChatGoogle
# OpenAI
agent = CodeAgent(task="...", llm=ChatOpenAI(model="gpt-4o"))
# Anthropic
agent = CodeAgent(task="...", llm=ChatAnthropic(model="claude-sonnet-4-6"))
# Google Gemini
agent = CodeAgent(task="...", llm=ChatGoogle(model="gemini-2.0-flash"))
```
### Using Browser Session Directly
```python
import asyncio
from openbrowser import BrowserSession, BrowserProfile
async def main():
profile = BrowserProfile(
headless=True,
viewport_width=1920,
viewport_height=1080,
)
session = BrowserSession(browser_profile=profile)
await session.start()
await session.navigate_to("https://example.com")
screenshot = await session.screenshot()
await session.stop()
asyncio.run(main())
```
## Configuration
### Environment Variables
```bash
# Google (recommended)
export GOOGLE_API_KEY="..."
# OpenAI
export OPENAI_API_KEY="sk-..."
# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# Groq
export GROQ_API_KEY="gsk_..."
# AWS Bedrock
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="us-west-2"
# Azure OpenAI
export AZURE_OPENAI_API_KEY="..."
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
```
### BrowserProfile Options
```python
from openbrowser import BrowserProfile
profile = BrowserProfile(
    headless=True,
    viewport_width=1280,
    viewport_height=720,
    disable_security=False,
    extra_chromium_args=["--disable-gpu"],
    record_video_dir="./recordings",
    proxy={
        "server": "http://proxy.example.com:8080",
        "username": "user",
        "password": "pass",
    },
)
```
## Supported LLM Providers
| Provider | Class | Models |
|----------|-------|--------|
| **Google** | `ChatGoogle` | gemini-2.5-flash, gemini-2.5-pro |
| **OpenAI** | `ChatOpenAI` | gpt-4.1, o4-mini, o3 |
| **Anthropic** | `ChatAnthropic` | claude-sonnet-4-6, claude-opus-4-6 |
| **Groq** | `ChatGroq` | llama-4-scout, qwen3-32b |
| **AWS Bedrock** | `ChatAWSBedrock` | anthropic.claude-sonnet-4-6, amazon.nova-pro |
| **AWS Bedrock (Anthropic)** | `ChatAnthropicBedrock` | Claude models via Anthropic Bedrock SDK |
| **Azure OpenAI** | `ChatAzureOpenAI` | Any Azure-deployed model |
| **OpenRouter** | `ChatOpenRouter` | Any model on openrouter.ai |
| **DeepSeek** | `ChatDeepSeek` | deepseek-chat, deepseek-reasoner |
| **Cerebras** | `ChatCerebras` | llama3.1-8b, qwen-3-coder-480b |
| **Ollama** | `ChatOllama` | llama3.1, deepseek-r1 (local) |
| **OCI** | `ChatOCIRaw` | Oracle Cloud GenAI models |
| **Browser-Use** | `ChatBrowserUse` | External LLM service |
## Claude Code Plugin
Install OpenBrowser as a Claude Code plugin:
```bash
# Add the marketplace (one-time)
claude plugin marketplace add billy-enrizky/openbrowser-ai
# Install the plugin
claude plugin install openbrowser@openbrowser-ai
```
This installs the MCP server and 5 built-in skills:
| Skill | Description |
|-------|-------------|
| `web-scraping` | Extract structured data, handle pagination |
| `form-filling` | Fill forms, login flows, multi-step wizards |
| `e2e-testing` | Test web apps by simulating user interactions |
| `page-analysis` | Analyze page content, structure, metadata |
| `accessibility-audit` | Audit pages for WCAG compliance |
See [plugin/README.md](plugin/README.md) for detailed tool parameter documentation.
## Codex
OpenBrowser works with OpenAI Codex via native skill discovery.
### Quick Install
Tell Codex:
```
Fetch and follow instructions from https://raw.githubusercontent.com/billy-enrizky/openbrowser-ai/refs/heads/main/.codex/INSTALL.md
```
### Manual Install
```bash
# Clone the repository
git clone https://github.com/billy-enrizky/openbrowser-ai.git ~/.codex/openbrowser
# Symlink skills for native discovery
mkdir -p ~/.agents/skills
ln -s ~/.codex/openbrowser/plugin/skills ~/.agents/skills/openbrowser
# Restart Codex
```
Then configure the MCP server in your project (see [MCP Server](#mcp-server) below).
Detailed docs: [.codex/INSTALL.md](.codex/INSTALL.md)
## OpenCode
OpenBrowser works with [OpenCode.ai](https://opencode.ai) via plugin and skill symlinks.
### Quick Install
Tell OpenCode:
```
Fetch and follow instructions from https://raw.githubusercontent.com/billy-enrizky/openbrowser-ai/refs/heads/main/.opencode/INSTALL.md
```
### Manual Install
```bash
# Clone the repository
git clone https://github.com/billy-enrizky/openbrowser-ai.git ~/.config/opencode/openbrowser
# Create directories
mkdir -p ~/.config/opencode/plugins ~/.config/opencode/skills
# Symlink plugin and skills
ln -s ~/.config/opencode/openbrowser/.opencode/plugins/openbrowser.js ~/.config/opencode/plugins/openbrowser.js
ln -s ~/.config/opencode/openbrowser/plugin/skills ~/.config/opencode/skills/openbrowser
# Restart OpenCode
```
Then configure the MCP server in your project (see [MCP Server](#mcp-server) below).
Detailed docs: [.opencode/INSTALL.md](.opencode/INSTALL.md)
## OpenClaw
[OpenClaw](https://openclaw.ai) does not natively support MCP servers, but the community
[openclaw-mcp-adapter](https://github.com/androidStern-personal/openclaw-mcp-adapter) plugin
bridges MCP servers to OpenClaw agents.
1. Install the MCP adapter plugin (see its README for setup).
2. Add OpenBrowser as an MCP server in `~/.openclaw/openclaw.json`:
```json
{
  "plugins": {
    "entries": {
      "mcp-adapter": {
        "enabled": true,
        "config": {
          "servers": [
            {
              "name": "openbrowser",
              "transport": "stdio",
              "command": "uvx",
              "args": ["openbrowser-ai[mcp]", "--mcp"]
            }
          ]
        }
      }
    }
  }
}
```
The `execute_code` tool will be registered as a native OpenClaw agent tool.
For OpenClaw plugin documentation, see [docs.openclaw.ai/tools/plugin](https://docs.openclaw.ai/tools/plugin).
## MCP Server
OpenBrowser includes an MCP (Model Context Protocol) server that exposes browser automation as tools for AI assistants like Claude. No external LLM API keys required. The MCP client (Claude) provides the intelligence.
### Quick Setup
**Claude Code**: add to your project's `.mcp.json`:
```json
{
  "mcpServers": {
    "openbrowser": {
      "command": "uvx",
      "args": ["openbrowser-ai[mcp]", "--mcp"]
    }
  }
}
```
**Claude Desktop**: add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "openbrowser": {
      "command": "uvx",
      "args": ["openbrowser-ai[mcp]", "--mcp"],
      "env": {
        "OPENBROWSER_HEADLESS": "true"
      }
    }
  }
}
```
**Run directly:**
```bash
uvx openbrowser-ai[mcp] --mcp
```
### Tool
The MCP server exposes a single `execute_code` tool that runs Python code in a persistent namespace with browser automation functions. The LLM writes Python code to navigate, interact, and extract data, returning only what was explicitly requested.
**Available functions** (all async, use `await`):
| Category | Functions |
|----------|-----------|
| **Navigation** | `navigate(url, new_tab)`, `go_back()`, `wait(seconds)` |
| **Interaction** | `click(index)`, `input_text(index, text, clear)`, `scroll(down, pages, index)`, `send_keys(keys)`, `upload_file(index, path)` |
| **Dropdowns** | `select_dropdown(index, text)`, `dropdown_options(index)` |
| **Tabs** | `switch(tab_id)`, `close(tab_id)` |
| **JavaScript** | `evaluate(code)`: run JS in page context, returns Python objects |
| **State** | `browser.get_browser_state_summary()`: get page metadata and interactive elements |
| **CSS** | `get_selector_from_index(index)`: get CSS selector for an element |
| **Completion** | `done(text, success)`: signal task completion |
**Pre-imported libraries**: `json`, `csv`, `re`, `datetime`, `asyncio`, `Path`, `requests`, `numpy`, `pandas`, `matplotlib`, `BeautifulSoup`
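The persistent-namespace design described above can be sketched in a few lines (a minimal illustration only — `CodeSession` and the `_result` convention are invented here, not OpenBrowser's actual implementation): each `execute_code` call runs against the same dictionary, so variables defined in one call remain available in the next.

```python
# Sketch of a persistent execution namespace for an execute_code-style
# tool. CodeSession and its methods are illustrative names.
class CodeSession:
    def __init__(self):
        # One shared namespace survives across tool calls.
        self.namespace = {}

    def execute(self, code: str):
        exec(code, self.namespace)
        # By convention here, a snippet stores its result in `_result`.
        return self.namespace.get("_result")

session = CodeSession()
session.execute("titles = ['Post A', 'Post B', 'Post C']")
# A later call can reuse `titles` because the namespace persisted.
result = session.execute("_result = titles[:2]")
print(result)  # ['Post A', 'Post B']
```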
### Configuration
| Environment Variable | Description | Default |
|---------------------|-------------|---------|
| `OPENBROWSER_HEADLESS` | Run browser without GUI | `false` |
| `OPENBROWSER_ALLOWED_DOMAINS` | Comma-separated domain whitelist | (none) |
## MCP Benchmark: Why OpenBrowser
### E2E LLM Benchmark (6 Real-World Tasks, N=5 runs)
Six real-world browser tasks run through Claude Sonnet 4.6 on AWS Bedrock (Converse API) with a server-agnostic system prompt. The LLM autonomously decides which tools to call and when the task is complete. 5 runs per server with 10,000-sample bootstrap CIs. All tasks run against live websites.
| # | Task | Description | Target Site |
|:-:|------|-------------|-------------|
| 1 | **fact_lookup** | Navigate to a Wikipedia article and extract specific facts (creator and year) | en.wikipedia.org |
| 2 | **form_fill** | Fill out a multi-field form (text input, radio button, checkbox) and submit | httpbin.org/forms/post |
| 3 | **multi_page_extract** | Extract the titles of the top 5 stories from a dynamic page | news.ycombinator.com |
| 4 | **search_navigate** | Search Wikipedia, click a result, and extract specific information | en.wikipedia.org |
| 5 | **deep_navigation** | Navigate to a GitHub repo and find the latest release version number | github.com |
| 6 | **content_analysis** | Analyze page structure: count headings, links, and paragraphs | example.com |
<p align="center">
<img src="benchmarks/benchmark_comparison.png" alt="E2E LLM Benchmark: MCP Server Comparison" width="800" />
</p>
| MCP Server | Pass Rate | Duration (mean +/- std) | Tool Calls | Bedrock API Tokens |
|------------|:---------:|------------------------:|-----------:|-------------------:|
| **Playwright MCP** (Microsoft) | 100% | 92.2 +/- 11.4s | 11.0 +/- 1.4 | 150,248 |
| **Chrome DevTools MCP** (Google) | 100% | 128.8 +/- 6.2s | 19.8 +/- 0.4 | 310,856 |
| **OpenBrowser MCP** | 100% | 103.1 +/- 16.4s | 15.0 +/- 3.9 | **49,423** |
OpenBrowser uses **3x fewer tokens** than Playwright and **6.3x fewer** than Chrome DevTools, measured via Bedrock Converse API `usage` field (the actual billed tokens including system prompt, tool schemas, conversation history, and tool results).
### Cost per Benchmark Run (6 Tasks)
Based on Bedrock API token usage (input + output tokens at respective rates).
| Model | Playwright MCP | Chrome DevTools MCP | OpenBrowser MCP |
|-------|---------------:|--------------------:|----------------:|
| Claude Sonnet 4.6 ($3/$15 per M) | $0.47 | $0.96 | **$0.18** |
| Claude Opus 4.6 ($5/$25 per M) | $0.78 | $1.59 | **$0.30** |
### Why the Difference
Playwright and Chrome DevTools return full page accessibility snapshots as tool output (~124K-135K tokens for Wikipedia). The LLM reads the entire snapshot to find what it needs.
OpenBrowser uses a CodeAgent architecture (single `execute_code` tool). The LLM writes Python code that processes browser state server-side and returns only extracted results (~30-1,000 chars per call). The full page content never enters the LLM context window.
```
Playwright: navigate to Wikipedia -> 478,793 chars (full a11y tree returned to LLM)
OpenBrowser: navigate to Wikipedia -> 42 chars (page title only, state processed in code)
evaluate JS for infobox -> 896 chars (just the extracted data)
```
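The server-side extraction idea can be sketched with a stdlib-only example (illustrative only — OpenBrowser's real pipeline uses its own DOM extraction and the functions listed above): parse the full page locally and hand back only the few characters the task asked for.

```python
from html.parser import HTMLParser

# Illustrative sketch: process page HTML server-side and return only the
# extracted title, so the full document never enters the LLM context.
class TitleExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# A large page in, a tiny result out.
page = ("<html><head><title>Python (programming language)</title></head>"
        "<body>" + "x" * 400_000 + "</body></html>")
p = TitleExtractor()
p.feed(page)
print(len(page), "->", len(p.title))
```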
[Full comparison with methodology](https://docs.openbrowser.me/comparison)
## CLI Usage
```bash
# Run a browser automation task
uvx openbrowser-ai -p "Search for Python tutorials on Google"
# Install browser
uvx openbrowser-ai install
# Run MCP server
uvx openbrowser-ai[mcp] --mcp
```
## Project Structure
```
openbrowser-ai/
├── .claude-plugin/ # Claude Code marketplace config
├── .codex/ # Codex integration
│ └── INSTALL.md
├── .opencode/ # OpenCode integration
│ ├── INSTALL.md
│ └── plugins/openbrowser.js
├── plugin/ # Plugin package (skills + MCP config)
│ ├── .claude-plugin/
│ ├── .mcp.json
│ └── skills/ # 5 browser automation skills
├── src/openbrowser/
│ ├── __init__.py # Main exports
│ ├── cli.py # CLI commands
│ ├── config.py # Configuration
│ ├── actor/ # Element interaction
│ ├── agent/ # LangGraph agent
│ ├── browser/ # CDP browser control
│ ├── code_use/ # Code agent
│ ├── dom/ # DOM extraction
│ ├── llm/ # LLM providers
│ ├── mcp/ # MCP server
│ └── tools/ # Action registry
├── benchmarks/ # MCP benchmarks and E2E tests
│ ├── playwright_benchmark.py
│ ├── cdp_benchmark.py
│ ├── openbrowser_benchmark.py
│ └── e2e_published_test.py
└── tests/ # Test suite
```
## Testing
```bash
# Run unit tests
pytest tests/
# Run with verbose output
pytest tests/ -v
# E2E test the MCP server against the published PyPI package
uv run python benchmarks/e2e_published_test.py
```
### Benchmarks
Run individual MCP server benchmarks (JSON-RPC stdio, 5-step Wikipedia workflow):
```bash
uv run python benchmarks/openbrowser_benchmark.py # OpenBrowser MCP
uv run python benchmarks/playwright_benchmark.py # Playwright MCP
uv run python benchmarks/cdp_benchmark.py # Chrome DevTools MCP
```
Results are written to `benchmarks/*_results.json`. See [full comparison](https://docs.openbrowser.me/comparison) for methodology.
## Production deployment
AWS production infrastructure (VPC, EC2 backend, API Gateway, Cognito, DynamoDB, ECR, S3 + CloudFront) is defined in Terraform. See **[infra/production/terraform/README.md](infra/production/terraform/README.md)** for architecture, prerequisites, and step-by-step deploy (ECR -> build/push image -> `terraform apply`).
## Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Contact
- **Email**: billy.suharno@gmail.com
- **GitHub**: [@billy-enrizky](https://github.com/billy-enrizky)
- **Repository**: [github.com/billy-enrizky/openbrowser-ai](https://github.com/billy-enrizky/openbrowser-ai)
- **Documentation**: [https://docs.openbrowser.me](https://docs.openbrowser.me)
---
**Made with love for the AI automation community**
| text/markdown | null | null | null | null | null | null | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"aiofiles",
"boto3>=1.36.0",
"bubus>=1.5.6",
"cdp-use",
"click>=8.1.8",
"google-genai>=0.2.0",
"httpx>=0.28.1",
"imageio-ffmpeg>=0.6.0",
"imageio>=2.37.2",
"langchain-core>=0.3.0",
"langchain-openai>=0.2.0",
"langgraph",
"litellm==1.80.0",
"markdownify>=0.11.6",
"numpy>=2.4.0",
"openai... | [] | [] | [] | [] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T04:58:21.181037 | openbrowser_ai-0.1.26.tar.gz | 2,420,200 | 2d/90/7dbd2553e28af1eb8d3c8479cc4a500d16dd22dc716093c60c833b56fff5/openbrowser_ai-0.1.26.tar.gz | source | sdist | null | false | a5653e5c09130532c977a97e0a0b07b0 | 41a648049888b92da24d0b4c1af789cfb5c4d0100bb1e4ea3844f33773ee11d0 | 2d907dbd2553e28af1eb8d3c8479cc4a500d16dd22dc716093c60c833b56fff5 | null | [
"LICENSE"
] | 224 |
2.4 | tepilora-mcp | 0.1.6 | MCP server for Tepilora financial API | # Tepilora MCP Server
[](https://pypi.org/project/tepilora-mcp/)
[](https://pypi.org/project/tepilora-mcp/)
[](LICENSE)
MCP (Model Context Protocol) server for the [Tepilora](https://pypi.org/project/Tepilora/) financial API.
Gives AI assistants (Claude, Codex, etc.) native access to **226 financial data operations** — securities search, portfolio analytics, news, bonds, and more.
## Features
- **16 curated tools** in default mode, **234 tools** in full mode
- **Async client** (`AsyncTepiloraClient`) — non-blocking, optimized for MCP
- **Smart caching** — TTL + LRU eviction, skips mutating operations
- **Credit tracking** — per-session usage limits with configurable caps
- **Error handling** — user-friendly messages instead of raw tracebacks
- **Arrow IPC streaming** — binary format for large result sets
## Install
```bash
pip install tepilora-mcp
```
## Quick Start
### Claude Desktop
Add to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "tepilora": {
      "command": "tepilora-mcp",
      "env": {
        "TEPILORA_API_KEY": "your-api-key"
      }
    }
  }
}
```
### Claude Code
```bash
claude mcp add tepilora tepilora-mcp -e TEPILORA_API_KEY=your-api-key
```
### Run Directly
```bash
export TEPILORA_API_KEY=your-api-key
tepilora-mcp
```
## Available Tools
### Discovery (4 tools)
| Tool | Description |
|------|-------------|
| `list_namespaces` | List all 24 API namespaces with operation counts |
| `list_operations` | List operations for a namespace |
| `describe_operation` | Get parameter details for any operation |
| `call_operation` | Execute any of the 226 operations |
### Curated (9 tools)
| Tool | Description |
|------|-------------|
| `search_securities` | Search stocks, ETFs, bonds, funds |
| `get_security_details` | Get security information |
| `get_price_history` | Historical price data |
| `create_portfolio` | Create a portfolio |
| `get_portfolio_returns` | Portfolio return analysis |
| `run_analytics` | 68 analytics functions (summary-first: preview + MCP resource for full data) |
| `search_news` | Search financial news |
| `screen_bonds` | Screen bonds by criteria |
| `get_yield_curve` | Yield curve data |
### Utility (3 tools)
| Tool | Description |
|------|-------------|
| `clear_cache` | Clear the in-memory result cache |
| `get_credit_usage` | View session credit usage and limits |
| `reset_credits` | Reset the session credit counter |
### Streaming (1 tool)
| Tool | Description |
|------|-------------|
| `call_operation_arrow_stream` | Call any operation in Arrow IPC binary format |
### Full Mode (opt-in)
Set `TEPILORA_MCP_FULL_TOOLS=true` to expose all 226 operations as individual tools (218 additional tools on top of the 16 default).
## Configuration
| Environment Variable | Required | Default | Description |
|---------------------|----------|---------|-------------|
| `TEPILORA_API_KEY` | Yes | - | Your Tepilora API key |
| `TEPILORA_BASE_URL` | No | `https://tepiloradata.com` | API base URL |
| `TEPILORA_FALLBACK_URL` | No | `http://49.13.34.1` | Fallback API URL (used if base URL returns HTML) |
| `TEPILORA_MCP_FULL_TOOLS` | No | `false` | Register all 226 operations as tools |
| `TEPILORA_MCP_TIMEOUT` | No | `30` | Request timeout in seconds |
| `TEPILORA_MCP_CACHE_TTL` | No | `300` | Cache TTL in seconds (`0` disables cache) |
| `TEPILORA_MCP_CACHE_MAX_SIZE` | No | `1000` | Max cached entries (LRU eviction) |
| `TEPILORA_MCP_CREDIT_LIMIT` | No | `0` | Session credit cap (`0` = unlimited) |
## Caching
Results are cached in memory with a configurable TTL (default 5 minutes). Mutating operations (`create`, `update`, `delete`, `run`, etc.) are never cached. Use the `clear_cache` tool to manually flush.
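The caching strategy (TTL plus LRU eviction, with mutating operations skipped) can be sketched like this — the class and prefix list below are illustrative, not the package's actual internals:

```python
import time
from collections import OrderedDict

# Hedged sketch of a TTL + LRU cache that never caches mutating operations.
MUTATING_PREFIXES = ("create", "update", "delete", "run")

class TTLCache:
    def __init__(self, ttl=300, max_size=1000):
        self.ttl = ttl
        self.max_size = max_size
        self._store = OrderedDict()  # key -> (expires_at, value)

    def cacheable(self, operation: str) -> bool:
        return not operation.startswith(MUTATING_PREFIXES)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[0] < time.monotonic():
            self._store.pop(key, None)  # expired or missing
            return None
        self._store.move_to_end(key)  # mark as recently used
        return entry[1]

    def set(self, operation, key, value):
        if not self.cacheable(operation):
            return  # mutating operations are never cached
        if len(self._store) >= self.max_size:
            self._store.popitem(last=False)  # evict least recently used
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl=300, max_size=100)
cache.set("search_securities", "q=AAPL", {"results": ["AAPL"]})
cache.set("create_portfolio", "p1", {"id": 1})  # skipped: mutating
print(cache.get("q=AAPL"))  # hit
print(cache.get("p1"))      # None: was never cached
```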
## Credit Tracking
Each API operation has a credit cost (defined in the SDK schema). Set `TEPILORA_MCP_CREDIT_LIMIT` to cap per-session usage. Use `get_credit_usage` to monitor and `reset_credits` to start fresh.
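The per-session cap behaves roughly as sketched below (illustrative only — `CreditTracker` is an invented name, not the package's implementation), where `0` means unlimited, matching the `TEPILORA_MCP_CREDIT_LIMIT` default:

```python
# Sketch of per-session credit tracking with a configurable cap.
class CreditTracker:
    def __init__(self, limit=0):
        self.limit = limit  # 0 = unlimited
        self.used = 0

    def charge(self, cost):
        if self.limit and self.used + cost > self.limit:
            raise RuntimeError(
                f"Credit limit reached: {self.used}/{self.limit} used"
            )
        self.used += cost

    def reset(self):
        self.used = 0

tracker = CreditTracker(limit=10)
tracker.charge(6)
tracker.charge(4)      # exactly at the cap: still allowed
try:
    tracker.charge(1)  # would exceed the cap
except RuntimeError as exc:
    print(exc)
tracker.reset()        # start fresh, as reset_credits does
```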
## Error Handling
All tools return structured error messages instead of raw exceptions:
```json
{
  "error": "Rate limit reached: wait and retry, or reduce request frequency.",
  "details": "HTTPStatusError: 429 Too Many Requests"
}
```
Handled errors: HTTP 401/403/404/429/5xx, timeouts, connection failures, SDK errors, invalid parameters.
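Mapping a raw HTTP failure to the structured payload shown above might look like this (a sketch — the helper and message table are illustrative, not the server's code):

```python
# Hedged sketch: translate HTTP status codes into user-friendly messages
# instead of surfacing raw tracebacks.
FRIENDLY = {
    401: "Authentication failed: check TEPILORA_API_KEY.",
    404: "Not found: check the operation name and parameters.",
    429: "Rate limit reached: wait and retry, or reduce request frequency.",
}

def to_error_payload(status: int, reason: str) -> dict:
    message = FRIENDLY.get(status)
    if message is None and status >= 500:
        message = "Server error: the API is temporarily unavailable."
    return {
        "error": message or "Request failed.",
        "details": f"HTTPStatusError: {status} {reason}",
    }

payload = to_error_payload(429, "Too Many Requests")
print(payload["error"])  # the friendly message, not a traceback
```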
## API Coverage
24 namespaces, 226 operations:
| Namespace | Ops | Examples |
|-----------|-----|---------|
| securities | 12 | search, filter, history, facets |
| portfolio | 19 | create, returns, attribution, optimize |
| analytics | 68 | rolling volatility, Sharpe, drawdown, VaR |
| news | 7 | search, latest, trending |
| bonds | 7 | analyze, screen, curve, spread |
| stocks | 9 | technicals, screening, peers |
| options | 6 | pricing, Greeks, IV |
| macro | 6 | economic indicators, calendar |
| esg | 5 | scores, screening |
| *+ 15 more* | | alerts, audit, billing, clients, documents, ... |
## Requirements
- Python 3.10+
- [`Tepilora`](https://pypi.org/project/Tepilora/) >= 0.3.2
- [`fastmcp`](https://pypi.org/project/fastmcp/) >= 2.14, < 3
## License
MIT
| text/markdown | null | Tepilora <info@tepiloradata.com> | null | null | null | tepilora, mcp, finance, api, llm, claude | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Office/Business :: Financial",
"Topic :: Software ... | [] | null | null | >=3.10 | [] | [] | [] | [
"Tepilora>=0.3.2",
"fastmcp<3,>=2.14",
"pytest>=7; extra == \"dev\"",
"pytest-asyncio>=0.21; extra == \"dev\""
] | [] | [] | [] | [
"Homepage, https://github.com/Admintepilora/TepiloraMCP",
"Repository, https://github.com/Admintepilora/TepiloraMCP",
"Bug Tracker, https://github.com/Admintepilora/TepiloraMCP/issues",
"Documentation, https://github.com/Admintepilora/TepiloraMCP#readme"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T04:58:08.299781 | tepilora_mcp-0.1.6.tar.gz | 24,821 | c3/da/732ac2cb2485601c3a72a8aa20cd1d183395fdce5c0fa71f2506ab09163e/tepilora_mcp-0.1.6.tar.gz | source | sdist | null | false | b3c7885e76dce8a303ee1573b8c86a24 | 1d01a5a8e6a7e728422e8fb5a31fe7ca4ddfdc0e73f02192666364b1dd3739e1 | c3da732ac2cb2485601c3a72a8aa20cd1d183395fdce5c0fa71f2506ab09163e | MIT | [
"LICENSE"
] | 224 |
2.4 | opa-python-client | 2.0.4 | Client for connection to the OPA service |
# OpaClient - Open Policy Agent Python Client
[](https://raw.githubusercontent.com/Turall/OPA-python-client/master/LICENSE)
[](https://github.com/Turall/OPA-python-client/stargazers)
[](https://github.com/Turall/OPA-python-client/network)
[](https://github.com/Turall/OPA-python-client/issues)
[](https://pepy.tech/project/opa-python-client)
OpaClient is a Python client library designed to interact with the [Open Policy Agent (OPA)](https://www.openpolicyagent.org/). It supports both **synchronous** and **asynchronous** requests, making it easy to manage policies, data, and evaluate rules in OPA servers.
## Features
- **Manage Policies**: Create, update, retrieve, and delete policies.
- **Manage Data**: Create, update, retrieve, and delete data in OPA.
- **Evaluate Policies**: Use input data to evaluate policies and return decisions.
- **Synchronous & Asynchronous**: Choose between sync or async operations to suit your application.
- **SSL/TLS Support**: Communicate securely with SSL/TLS, including client certificates.
- **Customizable**: Use custom headers, timeouts, and other configurations.
## Installation
You can install the OpaClient package via `pip`:
```bash
pip install opa-python-client
```
## Quick Start
### Synchronous Client Example
```python
from opa_client.opa import OpaClient
# Initialize the OPA client
client = OpaClient(host='localhost', port=8181)
# Check the OPA server connection
try:
    print(client.check_connection())  # True
finally:
    client.close_connection()
```
Or with the client factory:
```python
from opa_client import create_opa_client
client = create_opa_client(host="localhost", port=8181)
```
Check OPA health. To check bundles or plugins, add query parameters:
```python
from opa_client.opa import OpaClient
client = OpaClient()
print(client.check_health()) # response is True or False
print(client.check_health({"bundle": True})) # response is True or False
# If your diagnostic url different than default url, you can provide it.
print(client.check_health(diagnostic_url="http://localhost:8282/health")) # response is True or False
print(client.check_health(query={"bundle": True}, diagnostic_url="http://localhost:8282/health")) # response is True or False
```
### Asynchronous Client Example
```python
import asyncio
from opa_client.opa_async import AsyncOpaClient
async def main():
    async with AsyncOpaClient(host='localhost', port=8181) as client:
        result = await client.check_connection()
        print(result)

# Run the async main function
asyncio.run(main())
```
Or with the client factory:
```python
from opa_client import create_opa_client

client = create_opa_client(async_mode=True, host="localhost", port=8181)
```
## Secure Connection with SSL/TLS
You can use OpaClient with secure SSL/TLS connections, including mutual TLS (mTLS), by providing a client certificate and key.
### Synchronous Client with SSL/TLS
```python
from opa_client.opa import OpaClient
# Path to your certificate and private key
cert_path = '/path/to/client_cert.pem'
key_path = '/path/to/client_key.pem'
# Initialize the OPA client with SSL/TLS
client = OpaClient(
    host='your-opa-server.com',
    port=443,  # Typically for HTTPS
    ssl=True,
    cert=(cert_path, key_path)  # Provide the certificate and key as a tuple
)

# Check the OPA server connection
try:
    result = client.check_connection()
    print(result)
finally:
    client.close_connection()
```
### Asynchronous Client with SSL/TLS
```python
import asyncio
from opa_client.opa_async import AsyncOpaClient
# Path to your certificate and private key
cert_path = '/path/to/client_cert.pem'
key_path = '/path/to/client_key.pem'
async def main():
    # Initialize the OPA client with SSL/TLS
    async with AsyncOpaClient(
        host='your-opa-server.com',
        port=443,  # Typically for HTTPS
        ssl=True,
        cert=(cert_path, key_path)  # Provide the certificate and key as a tuple
    ) as client:
        # Check the OPA server connection
        result = await client.check_connection()
        print(result)

# Run the async main function
asyncio.run(main())
```
## Usage
### Policy Management
#### Create or Update a Policy
You can create or update a policy using the following syntax:
- **Synchronous**:
```python
policy_name = 'example_policy'
policy_content = '''
package example
default allow = false
allow {
    input.user == "admin"
}
'''
client.update_policy_from_string(policy_content, policy_name)
```
- **Asynchronous**:
```python
await client.update_policy_from_string(policy_content, policy_name)
```
Or from a URL:
- **Synchronous**:
```python
policy_name = 'example_policy'
client.update_policy_from_url("http://opapolicyurlexample.test/example.rego", policy_name)
```
- **Asynchronous**:
```python
await client.update_policy_from_url("http://opapolicyurlexample.test/example.rego", policy_name)
```
Update a policy from a Rego file:
```python
client.update_opa_policy_fromfile("/your/path/filename.rego", endpoint="fromfile") # response is True
client.get_policies_list()
```
- **Asynchronous**:
```python
await client.update_opa_policy_fromfile("/your/path/filename.rego", endpoint="fromfile") # response is True
await client.get_policies_list()
```
#### Retrieve a Policy
After creating a policy, you can retrieve it:
- **Synchronous**:
```python
policy = client.get_policy('example_policy')
print(policy)
# or
policies = client.get_policies_list()
print(policies)
```
- **Asynchronous**:
```python
policy = await client.get_policy('example_policy')
print(policy)
# or
policies = await client.get_policies_list()
print(policies)
```
Save a policy from the OPA service to a file:
```python
client.policy_to_file(policy_name="example_policy", path="/your/path", filename="example.rego")
```
- **Asynchronous**:
```python
await client.policy_to_file(policy_name="example_policy", path="/your/path", filename="example.rego")
```
Get information about policy paths and rules:
```python
print(client.get_policies_info())
#{'example_policy': {'path': 'http://localhost:8181/v1/data/example', 'rules': ['http://localhost:8181/v1/data/example/allow']}}
```
- **Asynchronous**:
```python
print(await client.get_policies_info())
#{'example_policy': {'path': 'http://localhost:8181/v1/data/example', 'rules': ['http://localhost:8181/v1/data/example/allow']}}
```
#### Delete a Policy
You can delete a policy by name:
- **Synchronous**:
```python
client.delete_policy('example_policy')
```
- **Asynchronous**:
```python
await client.delete_policy('example_policy')
```
### Data Management
#### Create or Update Data
You can upload arbitrary data to OPA:
- **Synchronous**:
```python
data_name = 'users'
data_content = {
    "users": [
        {"name": "alice", "role": "admin"},
        {"name": "bob", "role": "user"}
    ]
}
client.update_or_create_data(data_content, data_name)
```
- **Asynchronous**:
```python
await client.update_or_create_data(data_content, data_name)
```
#### Retrieve Data
You can fetch the data stored in OPA:
- **Synchronous**:
```python
data = client.get_data('users')
print(data)
# You can use query params for additional info
# provenance - If parameter is true, response will include build/version info in addition to the result.
# metrics - Return query performance metrics in addition to result
data = client.get_data('users',query_params={"provenance": True})
print(data) # {'provenance': {'version': '0.68.0', 'build_commit': 'db53d77c482676fadd53bc67a10cf75b3d0ce00b', 'build_timestamp': '2024-08-29T15:23:19Z', 'build_hostname': '3aae2b82a15f'}, 'result': {'users': [{'name': 'alice', 'role': 'admin'}, {'name': 'bob', 'role': 'user'}]}}
data = client.get_data('users',query_params={"metrics": True})
print(data) # {'metrics': {'counter_server_query_cache_hit': 0, 'timer_rego_external_resolve_ns': 7875, 'timer_rego_input_parse_ns': 875, 'timer_rego_query_compile_ns': 501083, 'timer_rego_query_eval_ns': 50250, 'timer_rego_query_parse_ns': 199917, 'timer_server_handler_ns': 1031291}, 'result': {'users': [{'name': 'alice', 'role': 'admin'}, {'name': 'bob', 'role': 'user'}]}}
```
- **Asynchronous**:
```python
data = await client.get_data('users')
print(data)
```
#### Delete Data
To delete data from OPA:
- **Synchronous**:
```python
client.delete_data('users')
```
- **Asynchronous**:
```python
await client.delete_data('users')
```
### Policy Evaluation
#### Check Permission (Policy Evaluation)
Evaluate a rule from a known package path. This is the **recommended method** for evaluating OPA decisions.
```python
rego = """
package play
default hello = false
hello {
    m := input.message
    m == "world"
}
"""
check_data = {"message": "world"}
client.update_policy_from_string(rego, "test")
print(client.query_rule(input_data=check_data, package_path="play", rule_name="hello")) # {'result': True}
```
- **Asynchronous**:
```python
rego = """
package play
default hello = false
hello {
    m := input.message
    m == "world"
}
"""
check_data = {"message": "world"}
await client.update_policy_from_string(rego, "test")
print(await client.query_rule(input_data=check_data, package_path="play", rule_name="hello")) # {'result': True}
```
You can also evaluate policies with input data using the older `check_permission` method.
### ⚠️ Deprecated: `check_permission()`
This method introspects the policy AST to construct a query path dynamically. It introduces unnecessary overhead and is **not recommended** for production use.
- **Synchronous**:
```python
input_data = {"user": "admin"}
policy_name = 'example_policy'
rule_name = 'allow'
result = client.check_permission(input_data, policy_name, rule_name)
print(result)
```
> 🔥 Prefer `query_rule()` instead for better performance and maintainability.
- **Asynchronous**:
```python
input_data = {"user": "admin"}
policy_name = 'example_policy'
rule_name = 'allow'
result = await client.check_permission(input_data, policy_name, rule_name)
print(result)
```
### Ad-hoc Queries
Execute ad-hoc queries directly:
- **Synchronous**:
```python
data = {
    "user_roles": {
        "alice": ["admin"],
        "bob": ["employee", "billing"],
        "eve": ["customer"]
    }
}
input_data = {"user": "admin"}
client.update_or_create_data(data, "userinfo")
result = client.ad_hoc_query(query="data.userinfo.user_roles[name]")
print(result) # {'result': [{'name': 'alice'}, {'name': 'bob'}, {'name': 'eve'}]}
```
- **Asynchronous**:
```python
data = {
    "user_roles": {
        "alice": ["admin"],
        "bob": ["employee", "billing"],
        "eve": ["customer"]
    }
}
input_data = {"user": "admin"}
await client.update_or_create_data(data, "userinfo")
result = await client.ad_hoc_query(query="data.userinfo.user_roles[name]")
print(result) # {'result': [{'name': 'alice'}, {'name': 'bob'}, {'name': 'eve'}]}
```
## API Reference
### Synchronous Client (OpaClient)
- `check_connection()`: Verify connection to OPA server.
- `get_policies_list()`: Get a list of all policies.
- `get_policies_info()`: Returns information about each policy, including policy path and policy rules.
- `get_policy(policy_name)`: Fetch a specific policy.
- `policy_to_file(policy_name)`: Save an OPA policy to a file.
- `update_policy_from_string(policy_content, policy_name)`: Upload or update a policy using its string content.
- `update_policy_from_url(url, endpoint)`: Update an OPA policy by fetching it from a URL.
- `update_policy_from_file(filepath, endpoint)`: Update an OPA policy using a policy file.
- `delete_policy(policy_name)`: Delete a specific policy.
- `update_or_create_data(data_content, data_name)`: Create or update data in OPA.
- `get_data(data_name)`: Retrieve data from OPA.
- `delete_data(data_name)`: Delete data from OPA.
- `check_permission(input_data, policy_name, rule_name)`: Evaluate a policy using input data.
- `query_rule(input_data, package_path, rule_name)`: Query a specific rule in a package.
- `ad_hoc_query(query, input_data)`: Run an ad-hoc query.
### Asynchronous Client (AsyncOpaClient)
Same as the synchronous client, but all methods are asynchronous and must be awaited.
## Contributing
Contributions are welcome! Feel free to open issues, fork the repo, and submit pull requests.
## License
This project is licensed under the MIT License.
| text/markdown | Tural Muradov | tural.muradoov@gmail.com | null | null | MIT | null | [
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Pytho... | [] | null | null | <4.0,>=3.10 | [] | [] | [] | [
"aiofiles<26.0.0,>=25.1.0",
"aiohttp[speedups]<4.0.0,>=3.13.3",
"requests<3.0.0,>=2.32.5",
"urllib3<3.0.0,>=2.6.3"
] | [] | [] | [] | [
"Homepage, https://github.com/Turall/OPA-python-client",
"Repository, https://github.com/Turall/OPA-python-client"
] | poetry/2.3.2 CPython/3.14.3 Darwin/25.3.0 | 2026-02-21T04:58:05.135832 | opa_python_client-2.0.4-py3-none-any.whl | 20,171 | 91/98/4350cf283579ef474e81ba1aa36dd5f9a0cc97d166d229207d871e91d2e5/opa_python_client-2.0.4-py3-none-any.whl | py3 | bdist_wheel | null | false | 97d1369f8e5ad660d99f64bf428b0d0f | 9aa73ae4228a31ef84d3800ab4bd9548f2d3a9e1e546d040b1fce7a284521096 | 91984350cf283579ef474e81ba1aa36dd5f9a0cc97d166d229207d871e91d2e5 | null | [
"LICENCE.md"
] | 368 |
2.4 | keras-nlp-nightly | 0.27.0.dev202602210456 | Pretrained models for Keras. | # KerasNLP: Multi-framework NLP Models
KerasNLP has been renamed to KerasHub! Read the announcement
[here](https://github.com/keras-team/keras-nlp/issues/1831).
This is a shim package for `keras-nlp` so that the old-style
`pip install keras-nlp` and `import keras_nlp` continue to work.
| text/markdown | null | Keras team <keras-users@googlegroups.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3 :: Only",
"Operating System :: Unix",
"Operating System :: Microso... | [] | null | null | >=3.10 | [] | [] | [] | [
"keras-hub-nightly==0.27.0.dev202602210456"
] | [] | [] | [] | [
"Home, https://keras.io/keras_hub/",
"Repository, https://github.com/keras-team/keras/keras_hub"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T04:57:08.846295 | keras_nlp_nightly-0.27.0.dev202602210456.tar.gz | 1,811 | 6d/46/49f850bd28a48daf17872e8311d514aa62917646aaf1fce6e0e22662a486/keras_nlp_nightly-0.27.0.dev202602210456.tar.gz | source | sdist | null | false | b8d5cd6b99e5b8f969af2e95c09209e9 | a601f36f6318722462405a372bb91c27820bea288af137656b56f2d1de37f6c3 | 6d4649f850bd28a48daf17872e8311d514aa62917646aaf1fce6e0e22662a486 | Apache-2.0 | [] | 188 |
2.4 | keras-hub-nightly | 0.27.0.dev202602210456 | Pretrained models for Keras. | # KerasHub: Multi-framework Pretrained Models
[](https://github.com/keras-team/keras-hub/actions?query=workflow%3ATests+branch%3Amaster)

[](https://www.kaggle.com/organizations/keras/models)
[](https://github.com/keras-team/keras-hub/issues)
> [!IMPORTANT]
> 📢 KerasNLP is now KerasHub! 📢 Read
> [the announcement](https://github.com/keras-team/keras-hub/issues/1831).
**KerasHub** is a pretrained modeling library that aims to be simple, flexible,
and fast. The library provides [Keras 3](https://keras.io/keras_3/)
implementations of popular model architectures, paired with a collection of
pretrained checkpoints available on [Kaggle Models](https://www.kaggle.com/organizations/keras/models).
Models can be used with text, image, and audio data for generation, classification,
and many other built-in tasks.
KerasHub is an extension of the core Keras API; KerasHub components are provided
as `Layer` and `Model` implementations. If you are familiar with Keras,
congratulations! You already understand most of KerasHub.
All models support JAX, TensorFlow, and PyTorch from a single model
definition and can be fine-tuned on GPUs and TPUs out of the box. Models can
be trained on individual accelerators with built-in PEFT techniques, or
fine-tuned at scale with model and data parallel training. See our
[Getting Started guide](https://keras.io/guides/keras_hub/getting_started)
to start learning our API.
## Quick Links
### For everyone
- [Home page](https://keras.io/keras_hub)
- [Getting started](https://keras.io/keras_hub/getting_started)
- [Guides](https://keras.io/keras_hub/guides)
- [API documentation](https://keras.io/keras_hub/api)
- [Pre-trained models](https://keras.io/keras_hub/presets/)
### For contributors
- [Call for Contributions](https://github.com/keras-team/keras-hub/issues/1835)
- [Roadmap](https://github.com/keras-team/keras-hub/issues/1836)
- [Contributing Guide](CONTRIBUTING.md)
- [Style Guide](STYLE_GUIDE.md)
- [API Design Guide](API_DESIGN_GUIDE.md)
## Quickstart
Choose a backend:
```python
import os
os.environ["KERAS_BACKEND"] = "jax" # Or "tensorflow" or "torch"!
```
Import KerasHub and other libraries:
```python
import keras
import keras_hub
import numpy as np
import tensorflow_datasets as tfds
```
Load a ResNet model and use it to predict a label for an image:
```python
classifier = keras_hub.models.ImageClassifier.from_preset(
"resnet_50_imagenet",
activation="softmax",
)
url = "https://upload.wikimedia.org/wikipedia/commons/a/aa/California_quail.jpg"
path = keras.utils.get_file(origin=url)
image = keras.utils.load_img(path)
preds = classifier.predict(np.array([image]))
print(keras_hub.utils.decode_imagenet_predictions(preds))
```
Load a BERT model and fine-tune it on IMDb movie reviews:
```python
classifier = keras_hub.models.TextClassifier.from_preset(
"bert_base_en_uncased",
activation="softmax",
num_classes=2,
)
imdb_train, imdb_test = tfds.load(
"imdb_reviews",
split=["train", "test"],
as_supervised=True,
batch_size=16,
)
classifier.fit(imdb_train, validation_data=imdb_test)
preds = classifier.predict(["What an amazing movie!", "A total waste of time."])
print(preds)
```
## Installation
To install the latest KerasHub release with Keras 3, simply run:
```
pip install --upgrade keras-hub
```
To install the latest nightly changes for both KerasHub and Keras, you can use
our nightly package.
```
pip install --upgrade keras-hub-nightly
```
Currently, installing KerasHub will always pull in TensorFlow for use of the
`tf.data` API for preprocessing. When preprocessing with `tf.data`, training
can still happen on any backend.
Visit the [core Keras getting started page](https://keras.io/getting_started/)
for more information on installing Keras 3, accelerator support, and
compatibility with different frameworks.
## Configuring your backend
If you have Keras 3 installed in your environment (see installation above),
you can use KerasHub with JAX, TensorFlow, or PyTorch. To do so, set the
`KERAS_BACKEND` environment variable. For example:
```shell
export KERAS_BACKEND=jax
```
Or in Colab, with:
```python
import os
os.environ["KERAS_BACKEND"] = "jax"
import keras_hub
```
> [!IMPORTANT]
> Make sure to set the `KERAS_BACKEND` **before** importing any Keras libraries;
> it will be used to set up Keras when it is first imported.
## Compatibility
We follow [Semantic Versioning](https://semver.org/), and plan to
provide backwards compatibility guarantees both for code and saved models built
with our components. While we continue with pre-release `0.y.z` development, we
may break compatibility at any time and APIs should not be considered stable.
## Disclaimer
KerasHub provides access to pre-trained models via the `keras_hub.models` API.
These pre-trained models are provided on an "as is" basis, without warranties
or conditions of any kind. The following underlying models are provided by third
parties, and subject to separate licenses:
BART, BLOOM, DeBERTa, DistilBERT, GPT-2, Llama, Mistral, OPT, RoBERTa, Whisper,
and XLM-RoBERTa.
## Citing KerasHub
If KerasHub helps your research, we appreciate your citations.
Here is the BibTeX entry:
```bibtex
@misc{kerashub2024,
title={KerasHub},
author={Watson, Matthew and Chollet, Fran\c{c}ois and Sreepathihalli,
Divyashree and Saadat, Samaneh and Sampath, Ramesh and Rasskin, Gabriel and
Zhu, Scott and Singh, Varun and Wood, Luke and Tan, Zhenyu and Stenbit,
Ian and Qian, Chen and Bischof, Jonathan and others},
year={2024},
howpublished={\url{https://github.com/keras-team/keras-hub}},
}
```
## Acknowledgements
Thank you to all of our wonderful contributors!
<a href="https://github.com/keras-team/keras-hub/graphs/contributors">
<img src="https://contrib.rocks/image?repo=keras-team/keras-hub" />
</a>
| text/markdown | null | Keras team <keras-users@googlegroups.com> | null | null | null | null | [
"Development Status :: 3 - Alpha",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3 :: Only",
"Operating System :: Unix",
"Operating System :: Microsoft :: Windows",
"Operating System :: MacO... | [] | null | null | >=3.11 | [] | [] | [] | [
"keras>=3.13",
"absl-py",
"numpy",
"packaging",
"regex",
"rich",
"kagglehub",
"tensorflow-text; platform_system != \"Windows\""
] | [] | [] | [] | [
"Home, https://keras.io/keras_hub/",
"Repository, https://github.com/keras-team/keras/keras_hub"
] | twine/6.1.0 CPython/3.13.7 | 2026-02-21T04:57:04.562288 | keras_hub_nightly-0.27.0.dev202602210456.tar.gz | 898,204 | b6/42/b72035aec1db8b57760047f8be2ac7781372f061a3c76ebc4b887fd20256/keras_hub_nightly-0.27.0.dev202602210456.tar.gz | source | sdist | null | false | 1cf8e48d295b07cb4acf2266972d4ded | b0b0fe2f4192fd08e679b95da3e943c3cc53aa36c9f1b4cfa67fb49bbd8e8dd6 | b642b72035aec1db8b57760047f8be2ac7781372f061a3c76ebc4b887fd20256 | Apache-2.0 | [] | 193 |
2.4 | pyconnora | 1.0.2 | A simplified Python library for ORACLE sql operations. | # Oracle Database Connection Utility
A lightweight Python utility class for connecting to **Oracle databases** using `cx_Oracle`, with built-in support for environment variables and `.env` configuration.
---
## 📘 Overview
The `Connect` class provides a simple interface to:
- Establish a connection to an Oracle database.
- Execute parameterized queries securely.
- Fetch data from specific tables or columns.
- Automatically load credentials from a `.env` file or environment variables.
This helps you avoid hardcoding credentials and ensures safer, cleaner database access in Python.
---
## ⚙️ Features
- ✅ Automatic `.env` file loading based on OS (Windows/Linux).
- ✅ Fallback to environment variables (`ORA_HOST`, `ORA_USER`, etc.).
- ✅ Secure parameter binding (prevents SQL injection).
- ✅ Simple API for fetching data (WIP).
- ✅ Easy cleanup with `close()`.
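The environment-variable fallback described above can be sketched with plain `os.environ` lookups. The helper below is illustrative, not the library's actual code; only `ORA_HOST` and `ORA_USER` are named in this README, and the `"localhost"` default is an assumption:

```python
import os

# Illustrative fallback sketch (not the library's actual code): prefer values
# already present in the environment, as loaded from a .env file or the OS.
def load_oracle_config(env=None):
    env = os.environ if env is None else env
    return {
        "host": env.get("ORA_HOST", "localhost"),  # assumed default
        "user": env.get("ORA_USER", ""),
    }

cfg = load_oracle_config({"ORA_HOST": "db.example.com", "ORA_USER": "scott"})
print(cfg["host"])  # db.example.com
```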
---
## 📦 Requirements
- **Python** 3.10 or later
- **Oracle Instant Client** installed and accessible
- **cx_Oracle** Python package
- **python-dotenv** for `.env` management
| text/markdown | null | Josephus <orca099210@gmail.com> | null | null | null | oracle, database, connection | [] | [] | null | null | >=3.12 | [] | [] | [] | [
"oracledb",
"dotenv"
] | [] | [] | [] | [] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T04:57:01.985643 | pyconnora-1.0.2.tar.gz | 2,597 | c4/9b/a96640c9d55314f8b9109a80911c41d1836d6fb3fb4f79e7e5913fb8de5f/pyconnora-1.0.2.tar.gz | source | sdist | null | false | 4ecee7aa2ee68a20dcc8f62a653fc6bf | 68a225c0311ef59e68906f84cb87e05aa2a7e88c12d390b7b934503cbdd31996 | c49ba96640c9d55314f8b9109a80911c41d1836d6fb3fb4f79e7e5913fb8de5f | MIT | [] | 193 |
2.4 | xcode-mcp-server | 1.3.6 | Drew's MCP server for Xcode integration | # Xcode MCP Server
[](https://pypi.org/project/xcode-mcp-server/)
[](https://pypi.org/project/xcode-mcp-server/)
[](https://pepy.tech/project/xcode-mcp-server)
[](https://modelcontextprotocol.io)
[](https://www.apple.com/macos/)
[](https://developer.apple.com/xcode/)
[](https://opensource.org/licenses/MIT)
[](https://github.com/drewster99/xcode-mcp-server/commits)
An MCP (Model Context Protocol) server that enables AI assistants to control and interact with Xcode for Apple platform development.
## What It Does
This server allows AI assistants (like Claude, Cursor, or other MCP clients) to:
- **Discover and navigate** your Xcode projects and source files
- **Build and run** iOS, macOS, tvOS, and watchOS applications
- **Execute and monitor tests** with detailed results
- **Debug build failures** by retrieving errors and warnings
- **Capture console output** from running applications
- **Take screenshots** of Xcode windows and iOS simulators
- **Manage simulators** and view their status
The AI can perform complete development workflows: finding a project, building it, running tests, debugging failures, and capturing results.
## Requirements
- **macOS** - This server only works on macOS
- **Xcode** - Xcode must be installed
- **Python 3.8+** - For running the server
## Security
The server implements path-based security to control which directories are accessible:
- **With restrictions:** Set `XCODEMCP_ALLOWED_FOLDERS=/path1:/path2:/path3` to limit access to specific directories
- **Default:** If not specified, allows access to your home directory (`$HOME`)
Security requirements:
- All paths must be absolute (starting with `/`)
- No `..` path components allowed
- All paths must exist and be directories
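The three requirements above can be expressed as a small predicate. This is an illustrative sketch of the rules, not the server's actual implementation:

```python
import os

# Illustrative check (not the server's actual code) for the three path rules:
# absolute, no ".." components, and an existing directory.
def is_valid_allowed_folder(path):
    if not path.startswith("/"):   # must be absolute
        return False
    if ".." in path.split("/"):    # no parent-directory components
        return False
    return os.path.isdir(path)     # must exist and be a directory

# XCODEMCP_ALLOWED_FOLDERS is a colon-separated list of such paths:
allowed = [p for p in "/tmp:/usr".split(":") if is_valid_allowed_folder(p)]
print(allowed)
```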
## Setup
First, ensure `uv` is installed (required for all methods below):
```bash
which uv || brew install uv
```
### 1. Claude Code (Recommended)
```bash
claude mcp add --scope user --transport stdio -- xcode-mcp-server `which uvx` xcode-mcp-server
```
To run a specific version, use:
```bash
# Example: How to run v1.3.0b6
claude mcp add --scope user --transport stdio -- xcode-mcp-server `which uvx` xcode-mcp-server==1.3.0b6
```
That's it! Claude Code handles the rest automatically.
### 2. Claude Desktop
Edit your Claude Desktop config file (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
"mcpServers": {
"xcode-mcp-server": {
"command": "uvx",
"args": [
"xcode-mcp-server"
]
}
}
}
```
If you'd like to allow only certain projects or folders to be accessible by xcode-mcp-server, add the `env` option, with a colon-separated list of absolute folder paths, like this:
```json
{
"mcpServers": {
"xcode-mcp-server": {
"command": "uvx",
"args": [
"xcode-mcp-server"
],
"env": {
"XCODEMCP_ALLOWED_FOLDERS": "/Users/andrew/my_project:/Users/andrew/Documents/source"
}
}
}
}
```
### 3. Cursor AI
In Cursor: Settings → Tools & Integrations → + New MCP Server
Or edit `~/.cursor/mcp.json` directly:
```json
{
"mcpServers": {
"xcode-mcp-server": {
"command": "uvx",
"args": ["xcode-mcp-server"]
}
}
}
```
**Optional:** Add folder restrictions with an `env` section (same format as Claude Desktop above).
## Usage
Once configured, simply ask your AI assistant to help with Xcode tasks:
- "Find all Xcode projects in my home directory"
- "Build the project at /path/to/MyProject.xcodeproj"
- "Run tests for this project and show me any failures"
- "What are the build errors in this project?"
- "Show me the directory structure of this project"
- "Take a screenshot of the Xcode window"
Most tools work with paths to `.xcodeproj` or `.xcworkspace` files, or with regular directory paths for browsing and navigation.
## Advanced Configuration
### Command Line Arguments
When running the server directly (for development or custom setups), these options are available:
**Build output control:**
- `--no-build-warnings` - Show only errors, exclude warnings
- `--always-include-build-warnings` - Always show warnings (default)
**Notifications:**
- `--show-notifications` - Enable macOS notifications for operations
- `--hide-notifications` - Disable notifications (default)
**Access control:**
- `--allowed /path` - Add allowed folder (can be repeated)
Example:
```bash
xcode-mcp-server --no-build-warnings --show-notifications --allowed ~/Projects
```
**Note:** When using MCP clients (Claude, Cursor), configure these via the `env` section in your client's config file instead.
## Development
The server is built with FastMCP and uses AppleScript to communicate with Xcode.
### Local Testing
Test with MCP Inspector:
```bash
export XCODEMCP_ALLOWED_FOLDERS=~/Projects
mcp dev xcode_mcp_server/__main__.py
```
This opens an inspector interface where you can test tools directly. Provide paths as quoted strings: `"/Users/you/Projects/MyApp.xcodeproj"`
## Limitations
- AppleScript syntax may need adjustments for specific Xcode versions
- Some operations require the project to be open in Xcode first
| text/markdown | null | Andrew Benson <db@nuclearcyborg.com> | null | null | MIT | mcp, server, xcode | [
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Py... | [] | null | null | >=3.8 | [] | [] | [] | [
"mcp[cli]>=1.2.0",
"questionary>=2.0.0",
"rich>=13.0.0"
] | [] | [] | [] | [
"Homepage, https://github.com/drewster99/xcode-mcp-server",
"Repository, https://github.com/drewster99/xcode-mcp-server",
"Issues, https://github.com/drewster99/xcode-mcp-server/issues"
] | twine/6.2.0 CPython/3.14.2 | 2026-02-21T04:56:41.275898 | xcode_mcp_server-1.3.6.tar.gz | 99,206 | 72/f4/eb5a47570d85c46f3b93ee841ca6c2352a652bbdf7be5c83607bd2f66523/xcode_mcp_server-1.3.6.tar.gz | source | sdist | null | false | dad2f8a85bb59da8b8bf404c156a2d5c | 93b30a542baa31bfae764363d70184bd9567bd06841c8f68b9c716b3d6a512d6 | 72f4eb5a47570d85c46f3b93ee841ca6c2352a652bbdf7be5c83607bd2f66523 | null | [
"LICENSE"
] | 240 |
2.3 | squawk-cli | 2.41.0 | Linter for PostgreSQL migrations | # squawk [](https://www.npmjs.com/package/squawk-cli)
> Linter for Postgres migrations & SQL
[Quick Start](https://squawkhq.com/docs/) | [Playground](https://play.squawkhq.com) | [Rules Documentation](https://squawkhq.com/docs/rules) | [GitHub Action](https://github.com/sbdchd/squawk-action) | [DIY GitHub Integration](https://squawkhq.com/docs/github_app)
## Why?
Prevent unexpected downtime caused by database migrations and encourage best
practices around Postgres schemas and SQL.
## Install
```shell
npm install -g squawk-cli
# or via PYPI
pip install squawk-cli
# or install binaries directly via the releases page
https://github.com/sbdchd/squawk/releases
```
### Or via Docker
You can also run Squawk using Docker. The official image is available on GitHub Container Registry.
```shell
# Assuming you want to check sql files in the current directory
docker run --rm -v $(pwd):/data ghcr.io/sbdchd/squawk:latest *.sql
```
### Or via the Playground
Use the WASM powered playground to check your SQL locally in the browser!
<https://play.squawkhq.com>
### Or via VSCode
<https://marketplace.visualstudio.com/items?itemName=sbdchd.squawk>
## Usage
```shell
❯ squawk example.sql
warning[prefer-bigint-over-int]: Using 32-bit integer fields can result in hitting the max `int` limit.
╭▸ example.sql:6:10
│
6 │ "id" serial NOT NULL PRIMARY KEY,
│ ━━━━━━
│
├ help: Use 64-bit integer values instead to prevent hitting this limit.
╭╴
6 │ "id" bigserial NOT NULL PRIMARY KEY,
╰╴ +++
warning[prefer-identity]: Serial types make schema, dependency, and permission management difficult.
╭▸ example.sql:6:10
│
6 │ "id" serial NOT NULL PRIMARY KEY,
│ ━━━━━━
│
├ help: Use an `IDENTITY` column instead.
╭╴
6 - "id" serial NOT NULL PRIMARY KEY,
6 + "id" integer generated by default as identity NOT NULL PRIMARY KEY,
╰╴
warning[prefer-text-field]: Changing the size of a `varchar` field requires an `ACCESS EXCLUSIVE` lock, that will prevent all reads and writes to the table.
╭▸ example.sql:7:13
│
7 │ "alpha" varchar(100) NOT NULL
│ ━━━━━━━━━━━━
│
├ help: Use a `TEXT` field with a `CHECK` constraint.
╭╴
7 - "alpha" varchar(100) NOT NULL
7 + "alpha" text NOT NULL
╰╴
warning[require-concurrent-index-creation]: During normal index creation, table updates are blocked, but reads are still allowed.
╭▸ example.sql:10:1
│
10 │ CREATE INDEX "field_name_idx" ON "table_name" ("field_name");
│ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
│
├ help: Use `concurrently` to avoid blocking writes.
╭╴
10 │ CREATE INDEX concurrently "field_name_idx" ON "table_name" ("field_name");
╰╴ ++++++++++++
warning[constraint-missing-not-valid]: By default new constraints require a table scan and block writes to the table while that scan occurs.
╭▸ example.sql:12:24
│
12 │ ALTER TABLE table_name ADD CONSTRAINT field_name_constraint UNIQUE (field_name);
│ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
│
╰ help: Use `NOT VALID` with a later `VALIDATE CONSTRAINT` call.
warning[disallowed-unique-constraint]: Adding a `UNIQUE` constraint requires an `ACCESS EXCLUSIVE` lock which blocks reads and writes to the table while the index is built.
╭▸ example.sql:12:28
│
12 │ ALTER TABLE table_name ADD CONSTRAINT field_name_constraint UNIQUE (field_name);
│ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
│
╰ help: Create an index `CONCURRENTLY` and create the constraint using the index.
Find detailed examples and solutions for each rule at https://squawkhq.com/docs/rules
Found 6 issues in 1 file (checked 1 source file)
```
### `squawk --help`
```
squawk
Find problems in your SQL
USAGE:
squawk [FLAGS] [OPTIONS] [path]... [SUBCOMMAND]
FLAGS:
--assume-in-transaction
Assume that a transaction will wrap each SQL file when run by a migration tool
Use --no-assume-in-transaction to override this setting in any config file that exists
-h, --help
Prints help information
-V, --version
Prints version information
--verbose
Enable debug logging output
OPTIONS:
-c, --config <config-path>
Path to the squawk config file (.squawk.toml)
--debug <format>
Output debug info [possible values: Lex, Parse]
--exclude-path <excluded-path>...
Paths to exclude
For example: --exclude-path=005_user_ids.sql --exclude-path=009_account_emails.sql
--exclude-path='*user_ids.sql'
-e, --exclude <rule>...
Exclude specific warnings
For example: --exclude=require-concurrent-index-creation,ban-drop-database
--pg-version <pg-version>
Specify postgres version
For example: --pg-version=13.0
--reporter <reporter>
Style of error reporting [possible values: Tty, Gcc, Json]
--stdin-filepath <filepath>
Path to use in reporting for stdin
ARGS:
<path>...
Paths to search
SUBCOMMANDS:
help Prints this message or the help of the given subcommand(s)
upload-to-github Comment on a PR with Squawk's results
```
## Rules
Individual rules can be disabled via the `--exclude` flag
```shell
squawk --exclude=adding-field-with-default,disallowed-unique-constraint example.sql
```
### Disabling rules via comments
Rule violations can be ignored via the `squawk-ignore` comment:
```sql
-- squawk-ignore ban-drop-column
alter table t drop column c cascade;
```
You can also ignore multiple rules with a comma-separated list:
```sql
-- squawk-ignore ban-drop-column, renaming-column,ban-drop-database
alter table t drop column c cascade;
```
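As a rough illustration of how such a comment breaks down into rule names (a Python sketch, not Squawk's actual Rust implementation):

```python
# Illustrative parser (not Squawk's actual code) for a squawk-ignore comment:
# strip the marker, then split the remainder on commas.
def parse_ignore_comment(line):
    marker = "-- squawk-ignore"
    stripped = line.strip()
    if not stripped.startswith(marker):
        return None
    rest = stripped[len(marker):]
    return [rule.strip() for rule in rest.split(",") if rule.strip()]

print(parse_ignore_comment(
    "-- squawk-ignore ban-drop-column, renaming-column,ban-drop-database"
))
# → ['ban-drop-column', 'renaming-column', 'ban-drop-database']
```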
To ignore a rule for the entire file, use `squawk-ignore-file`:
```sql
-- squawk-ignore-file ban-drop-column
alter table t drop column c cascade;
-- also ignored!
alter table t drop column d cascade;
```
Or leave off the rule names to ignore all rules for the file:
```sql
-- squawk-ignore-file
alter table t drop column c cascade;
create table t (a int);
```
### Configuration file
Rules can also be disabled with a configuration file.
By default, Squawk will traverse up from the current directory to find a `.squawk.toml` configuration file. You may specify a custom path with the `-c` or `--config` flag.
```shell
squawk --config=~/.squawk.toml example.sql
```
The `--exclude` flag will always be prioritized over the configuration file.
**Example `.squawk.toml`**
```toml
excluded_rules = [
"require-concurrent-index-creation",
"require-concurrent-index-deletion",
]
```
See the [Squawk website](https://squawkhq.com/docs/rules) for documentation on each rule with examples and reasoning.
## Bot Setup
Squawk works as a CLI tool but can also create comments on GitHub Pull
Requests using the `upload-to-github` subcommand.
Here's an example comment created by `squawk` using the `example.sql` in the repo:
<https://github.com/sbdchd/squawk/pull/14#issuecomment-647009446>
See the ["GitHub Integration" docs](https://squawkhq.com/docs/github_app) for more information.
## `pre-commit` hook
Integrate Squawk into Git workflow with [pre-commit](https://pre-commit.com/). Add the following
to your project's `.pre-commit-config.yaml`:
```yaml
repos:
- repo: https://github.com/sbdchd/squawk
rev: 2.41.0
hooks:
- id: squawk
files: path/to/postgres/migrations/written/in/sql
```
Note the `files` parameter as it specifies the location of the files to be linted.
## Prior Art / Related
- <https://github.com/erik/squabble>
- <https://github.com/yandex/zero-downtime-migrations>
- <https://github.com/tbicr/django-pg-zero-downtime-migrations>
- <https://github.com/3YOURMIND/django-migration-linter>
- <https://github.com/ankane/strong_migrations>
- <https://github.com/AdmTal/PostgreSQL-Query-Lock-Explainer>
- <https://github.com/stripe/pg-schema-diff>
- <https://github.com/kristiandupont/schemalint>
- <https://github.com/supabase-community/postgres-language-server>
- <https://github.com/premium-minds/sonar-postgres-plugin>
- <https://engineering.fb.com/2022/11/30/data-infrastructure/static-analysis-sql-queries/>
- <https://github.com/xNaCly/sqleibniz>
- <https://github.com/sqlfluff/sqlfluff>
- <https://atlasgo.io/lint/analyzers>
- <https://github.com/tobymao/sqlglot>
- <https://github.com/paupino/pg_parse>
- <https://github.com/sql-formatter-org/sql-formatter>
- <https://github.com/darold/pgFormatter>
- <https://github.com/sqls-server/sqls>
- <https://github.com/joe-re/sql-language-server>
- <https://github.com/nene/sql-parser-cst>
- <https://github.com/nene/prettier-plugin-sql-cst>
- <https://www.sqlstyle.guide>
- <https://github.com/ivank/potygen>
## Related Blog Posts / SE Posts / PG Docs
- <https://www.braintreepayments.com/blog/safe-operations-for-high-volume-postgresql/>
- <https://gocardless.com/blog/zero-downtime-postgres-migrations-the-hard-parts/>
- <https://www.citusdata.com/blog/2018/02/22/seven-tips-for-dealing-with-postgres-locks/>
- <https://realpython.com/create-django-index-without-downtime/#non-atomic-migrations>
- <https://dba.stackexchange.com/questions/158499/postgres-how-is-set-not-null-more-efficient-than-check-constraint>
- <https://www.postgresql.org/docs/10/sql-altertable.html#SQL-ALTERTABLE-NOTES>
- <https://www.postgresql.org/docs/current/explicit-locking.html>
- <https://benchling.engineering/move-fast-and-migrate-things-how-we-automated-migrations-in-postgres-d60aba0fc3d4>
- <https://medium.com/paypal-tech/postgresql-at-scale-database-schema-changes-without-downtime-20d3749ed680>
## Dev
```shell
cargo install
cargo run
./s/test
./s/lint
./s/fmt
```
... or with nix:
```
$ nix develop
[nix-shell]$ cargo run
[nix-shell]$ cargo insta review
[nix-shell]$ ./s/test
[nix-shell]$ ./s/lint
[nix-shell]$ ./s/fmt
```
### Adding a New Rule
When adding a new rule, running `cargo xtask new-rule` will create stubs for your rule in the Rust crate and in the documentation site.
```bash
cargo xtask new-rule 'prefer big serial'
```
### Releasing a New Version
1. Run `s/update-version`
```bash
# update version in squawk/Cargo.toml, package.json, flake.nix to 4.5.3
s/update-version 4.5.3
```
2. Update the `CHANGELOG.md`
Include a description of any fixes / additions. Make sure to include the PR numbers and credit the authors.
3. Create a new release on GitHub
Use the text and version from the `CHANGELOG.md`
### Algolia
The squawkhq.com Algolia index can be found on [the crawler website](https://crawler.algolia.com/admin/crawlers/9bf0dffb-bc5a-4d46-9b8d-2f1197285213/overview). Algolia reindexes the site every day at 5:30 (UTC).
## How it Works
Squawk uses its parser (based on rust-analyzer's parser) to create a CST. The
linters then use an AST layered on top of the CST to navigate and record
warnings, which are then pretty-printed!
| text/markdown; charset=UTF-8; variant=GFM | Squawk Team & Contributors | null | null | null | Apache-2.0 OR MIT | postgres, postgresql, linter | [
"License :: OSI Approved :: Apache Software License",
"License :: OSI Approved :: MIT License",
"Operating System :: MacOS",
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Rust",
"Programming Language ::... | [] | https://squawkhq.com | null | >=3.8 | [] | [] | [] | [] | [] | [] | [] | [
"Source Code, https://github.com/sbdchd/squawk"
] | maturin/1.12.3 | 2026-02-21T04:56:31.599719 | squawk_cli-2.41.0-py3-none-manylinux_2_28_x86_64.whl | 6,408,719 | 63/29/3ea608aa55beb222e130308abd768bef17c707ee62706576947fc9f01e67/squawk_cli-2.41.0-py3-none-manylinux_2_28_x86_64.whl | py3 | bdist_wheel | null | false | 114e9bf26990949c9376c25f6afb11ca | ebf2eca5ebdabd5e8fa04b8f3576cc3ff34dce697414f8697b01b4979c356096 | 63293ea608aa55beb222e130308abd768bef17c707ee62706576947fc9f01e67 | null | [] | 574 |