---
title: Participatory Planning Application
emoji: 🏛️
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
---
# Participatory Planning Application
An AI-powered collaborative urban planning platform for multi-stakeholder engagement sessions with advanced sentence-level categorization and fine-tuning capabilities.
## Author & Copyright
Created by: Marcos Thadeu Queiroz Magalhães 📧 thadillo@gmail.com
Copyright © 2024-2025 Marcos Thadeu Queiroz Magalhães. All rights reserved.
This software must be cited and credited to Marcos Thadeu Queiroz Magalhães whenever used or employed in any capacity.
Developed using Claude Code for AI-enhanced development.
## Features
### Core Features
- 🎯 Token-based access - Self-service registration for participants
- 🤖 AI categorization - Automatic classification using BART zero-shot models (free & offline)
- 📝 Sentence-level analysis - Each sentence categorized independently for multi-topic submissions
- 🗺️ Geographic mapping - Interactive visualization of geotagged contributions
- 📊 Analytics dashboard - Real-time charts with submission and sentence-level aggregation
- 💾 Session management - Export/import for pause/resume workflows
- 👥 Multi-stakeholder - Government, Community, Industry, NGO, Academic, Other
### Advanced AI Features
- 🧠 Model Fine-tuning - Train custom models with LoRA or head-only methods
- 📈 Real-time training progress - Detailed epoch/step/loss tracking during training
- 📁 Training data management - Export, import, and clear training examples
- 🎛️ Multiple training modes - Head-only (fast, <100 examples) or LoRA (better, >100 examples)
- 📦 Model deployment - Deploy fine-tuned models with one click
- 🗑️ Force delete - Remove stuck or problematic training runs
### Sentence-Level Categorization
- ✂️ Smart segmentation - Handles abbreviations, bullet points, and complex punctuation (see the sketch after this list)
- 🎯 Independent classification - Each sentence gets its own category
- 📊 Category distribution - View breakdown of categories within submissions
- 🔄 Backward compatible - Falls back to submission-level for legacy data
- ✏️ Sentence editing - Edit individual sentence categories in UI
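
A rough sketch of the kind of segmentation logic described above, assuming a simple regex-based splitter; the app's actual `sentence_segmenter.py` may handle more cases.

```python
# Illustrative splitter: bullet markers and newlines act as hard boundaries,
# then text is split on ., ! or ? followed by whitespace and a capital letter,
# which keeps most lowercase-followed abbreviations ("Main St. near") intact.
# This is a sketch, not the app's actual sentence_segmenter.py.
import re

def split_sentences(text: str) -> list[str]:
    chunks = re.split(r"(?:^|\n)\s*[-*•]\s+", text)  # strip bullet markers
    sentences = []
    for chunk in chunks:
        parts = re.split(r"(?<=[.!?])\s+(?=[A-Z])", chunk.strip())
        sentences.extend(p.strip() for p in parts if p.strip())
    return sentences

print(split_sentences(
    "Fix potholes on Main St. near the school. We also need:\n"
    "- More bike lanes\n- Better lighting"
))
# ['Fix potholes on Main St. near the school.', 'We also need:',
#  'More bike lanes', 'Better lighting']
```
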
## Categories
The system classifies text into six strategic planning categories:
- Vision - Long-term aspirational goals and ideal future states
- Problem - Current issues, challenges, and gaps
- Objectives - Specific, measurable goals and targets
- Directives - High-level mandates and policy directions
- Values - Guiding principles and community priorities
- Actions - Concrete implementation steps and projects
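
As a rough illustration (not the app's exact `analyzer.py` logic), this zero-shot categorization can be reproduced with the Hugging Face `transformers` pipeline and the default BART-large-MNLI model, using the six categories above as candidate labels:

```python
# Minimal zero-shot categorization sketch; labels mirror the six categories
# listed above, but the app's actual prompts/labels may differ.
from transformers import pipeline

CATEGORIES = ["Vision", "Problem", "Objectives", "Directives", "Values", "Actions"]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sentence = "We need safer bike lanes connecting the waterfront to downtown."
result = classifier(sentence, candidate_labels=CATEGORIES)

# The highest-scoring label is taken as the sentence's category.
print(result["labels"][0], round(result["scores"][0], 3))
```
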
## Quick Start
### Basic Setup
- Access the application at `http://localhost:5000`
- Login with admin token: `<your-admin-token-from-startup>`
- Go to Registration to get the participant signup link
- Share the link with stakeholders
- Collect submissions and analyze with AI
### Sentence-Level Analysis Workflow
- Collect Submissions - Participants submit via web form
- Run Analysis - Click "Analyze All" in Admin → Submissions
- Review Sentences - Click "View Sentences" on any submission
- Correct Categories - Edit sentence categories as needed (creates training data)
- Train Model - Once you have 20+ sentence corrections, train a custom model
- Deploy Model - Activate your fine-tuned model for better accuracy
### Default Login
- Admin Token: `<your-admin-token-from-startup>`
- Admin Access: Full dashboard, analytics, moderation, AI training
## Tech Stack
- Backend: Flask (Python web framework)
- Database: SQLite with sentence-level schema
- AI Models:
  - BART-large-MNLI (default, 400M parameters)
  - DeBERTa-v3-base-MNLI (fast, 86M parameters)
  - DistilBART-MNLI (balanced, 134M parameters)
- Fine-tuning: LoRA (Low-Rank Adaptation) with PEFT
- Frontend: Bootstrap 5, Leaflet.js, Chart.js
- Deployment: Docker support
## AI Training
### Training Data Management
#### Export Training Examples
- Download all training data as JSON
- Option to export only sentence-level examples
- Use for backups or sharing datasets
#### Import Training Examples
- Load training data from JSON files
- Automatically skips duplicates
- Useful for migrating between environments
#### Clear Training Examples
- Remove unused examples to clean up
- Option to clear only sentence-level data
- Safe defaults prevent accidental deletion
### Training Modes
#### Head-Only Training (Recommended for <100 examples)
- Faster training (2-5 minutes)
- Lower memory usage
- Good for small datasets
- Only trains the classification layer (see the sketch below)
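
A minimal sketch of what head-only training means here, assuming the standard `BartForSequenceClassification` layout; the model name, label count, and `classification_head` attribute are assumptions, not the app's exact trainer code.

```python
# Freeze everything except the classification head (head-only mode sketch).
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "facebook/bart-large-mnli",
    num_labels=6,                  # six planning categories
    ignore_mismatched_sizes=True,  # replace the 3-way MNLI head with a 6-way head
)

for name, param in model.named_parameters():
    # Only the classification head stays trainable; the encoder/decoder are frozen.
    param.requires_grad = name.startswith("classification_head")
```
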
#### LoRA Fine-tuning (Recommended for >100 examples)
- Better accuracy on larger datasets
- Parameter-efficient (trains adapter layers)
- Configurable rank, alpha, and dropout (see the PEFT sketch below)
- Takes 5-15 minutes depending on data size
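
A hedged PEFT sketch of the LoRA setup described above; the rank, alpha, dropout, and target modules shown are illustrative defaults, not the configuration used in the app's `trainer.py`.

```python
# LoRA adapters on the BART attention projections, plus a trainable 6-way head.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "facebook/bart-large-mnli",
    num_labels=6,
    ignore_mismatched_sizes=True,
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                      # adapter rank
    lora_alpha=16,                            # scaling factor
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],      # BART attention projections
    modules_to_save=["classification_head"],  # keep the new head trainable too
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # shows how few parameters are updated
```
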
### Progress Tracking
During training, you'll see:
- Current epoch / total epochs
- Current step / total steps
- Real-time loss values
- Precise progress percentage
- Estimated time remaining
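
One way such progress can be surfaced is with a `transformers` `TrainerCallback`; this is a sketch under that assumption, not a description of the app's actual tracking mechanism.

```python
# Logs epoch, step, loss, and percent complete each time the Trainer logs.
from transformers import TrainerCallback

class ProgressCallback(TrainerCallback):
    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs and "loss" in logs:
            pct = 100.0 * state.global_step / max(state.max_steps, 1)
            print(
                f"epoch {state.epoch or 0:.2f} | "
                f"step {state.global_step}/{state.max_steps} | "
                f"loss {logs['loss']:.4f} | {pct:.1f}% complete"
            )
```

An instance would be passed to the trainer via `Trainer(callbacks=[ProgressCallback()], ...)`.
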
### Model Management
- Deploy models with one click
- Rollback to base model anytime
- Export trained models as ZIP files
- Force delete stuck or failed runs
- View detailed training metrics
## Demo Data
The app starts empty. You can:
- Generate tokens for test users
- Submit sample contributions (multi-sentence for best results)
- Run AI sentence-level analysis
- Correct sentence categories to build training data
- Train a custom fine-tuned model
- View analytics in submission or sentence mode
## File Structure

```
participatory_planner/
├── app/
│   ├── analyzer.py                  # AI classification engine
│   ├── sentence_segmenter.py        # Sentence splitting logic
│   ├── models/
│   │   └── models.py                # Database models (Submission, SubmissionSentence, etc.)
│   ├── routes/
│   │   ├── admin.py                 # Admin dashboard and API endpoints
│   │   └── main.py                  # Public submission forms
│   ├── fine_tuning/
│   │   ├── trainer.py               # LoRA fine-tuning engine
│   │   └── model_manager.py         # Model deployment/rollback
│   └── templates/
│       ├── admin/
│       │   ├── submissions.html     # Sentence-level UI
│       │   ├── dashboard.html       # Analytics with dual modes
│       │   └── training.html        # Fine-tuning interface
│       └── submit.html              # Public submission form
├── migrations/
│   └── migrate_to_sentence_level.py
├── models/
│   ├── finetuned/                   # Trained model checkpoints
│   └── zero_shot/                   # Base BART models
├── data/
│   └── app.db                       # SQLite database
└── README.md
```
## Environment Variables

```
SECRET_KEY=your-secret-key-here
MODELS_DIR=models/finetuned
ZERO_SHOT_MODELS_DIR=models/zero_shot
```
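
A minimal sketch of how these variables might be read in a Flask-style config class; the variable names match the list above, while the defaults are illustrative assumptions.

```python
# Reads the environment variables listed above, with fallback defaults.
import os

class Config:
    SECRET_KEY = os.environ.get("SECRET_KEY", "dev-only-change-me")
    MODELS_DIR = os.environ.get("MODELS_DIR", "models/finetuned")
    ZERO_SHOT_MODELS_DIR = os.environ.get("ZERO_SHOT_MODELS_DIR", "models/zero_shot")
```
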
## API Endpoints
### Public
- `POST /submit` - Submit new contribution
- `GET /register/:token` - Participant registration
### Admin (requires auth)
- `POST /admin/api/analyze` - Analyze submissions with sentences
- `POST /admin/api/update-sentence-category/:id` - Edit sentence category
- `GET /admin/api/export-training-examples` - Export training data
- `POST /admin/api/import-training-examples` - Import training data
- `POST /admin/api/clear-training-examples` - Clear training data
- `POST /admin/api/start-fine-tuning` - Start model training
- `GET /admin/api/training-status/:id` - Get training progress
- `POST /admin/api/deploy-model/:id` - Deploy fine-tuned model
- `DELETE /admin/api/force-delete-training-run/:id` - Force delete run
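
A hedged example of calling the public endpoint with `requests`. The form field names (`text`, `lat`, `lng`) are illustrative assumptions, since the real payload is defined by the app's submission form; admin endpoints additionally require an authenticated admin session (not shown).

```python
# Submit a sample contribution to a locally running instance.
import requests

BASE = "http://localhost:5000"

resp = requests.post(
    f"{BASE}/submit",
    data={
        "text": "The waterfront needs safer crossings. Add lighting near the park.",
        "lat": -15.79,   # hypothetical geotag fields
        "lng": -47.88,
    },
    timeout=30,
)
print(resp.status_code)
```
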
## Database Schema
### Key Tables
#### submissions
- Core submission data
- `sentence_analysis_done` flag for tracking
- Backward compatible with old category field
#### submission_sentences
- Individual sentences from submissions
- Each sentence has its own category
- Linked to parent submission via foreign key
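
An illustrative Flask-SQLAlchemy model for this table; only the per-sentence category and the foreign key to the parent submission are described above, so the remaining column names and types are assumptions.

```python
# Sketch of the sentence table as a Flask-SQLAlchemy model.
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class SubmissionSentence(db.Model):
    __tablename__ = "submission_sentences"

    id = db.Column(db.Integer, primary_key=True)
    submission_id = db.Column(db.Integer, db.ForeignKey("submissions.id"), nullable=False)
    text = db.Column(db.Text, nullable=False)
    category = db.Column(db.String(32))  # one of the six planning categories
```
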
#### training_examples
- Admin corrections for fine-tuning
- Supports both sentence and submission-level
- Tracks usage in training runs
#### fine_tuning_runs
- Training job metadata and results
- Real-time progress tracking fields
- Model paths and deployment status
## Troubleshooting
### Training stuck at 0% progress?
- Check whether CUDA is available or CPU-only mode is being forced (see the check below)
- Reduce the batch size if you run out of memory
- Check the training logs for errors
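
A quick way to check the first point (GPU visibility) from a Python shell:

```python
# Prints whether CUDA is visible to PyTorch and how many GPUs are available.
import torch

print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
```
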
### Sentences not being categorized?
- Run the database migration: `python migrations/migrate_to_sentence_level.py`
- Ensure the `sentence_analysis_done` column exists
- Check that the sentence segmenter is working
### Can't delete training run?
- Use "Force Delete" button for active/training runs
- Type "DELETE" to confirm force deletion
- Check model files aren't locked
## License
MIT License - See LICENSE file for full details.
Copyright © 2024-2025 Marcos Thadeu Queiroz Magalhães (thadillo@gmail.com)
### Citation Requirement
Any use, modification, or distribution of this software must include proper attribution to:
Marcos Thadeu Queiroz Magalhães (thadillo@gmail.com)
This includes:
- Source code and documentation
- Academic publications or presentations
- Derivative works and commercial applications
- User interfaces and credits sections
## Contributing
Contributions welcome! Please:
- Fork the repository
- Create a feature branch
- Maintain attribution to the original author in all contributions
- Submit a pull request with a clear description
## Support
For issues or questions:
- Check existing documentation files
- Review troubleshooting section above
- Contact: thadillo@gmail.com
- Open an issue with detailed description