# Sequential Thinking with Chat Logging
This Gradio application now includes automatic chat logging for all thoughts and thinking processes.
## Chat Logging Features

### Local Logging (Always Enabled)

All thought records are automatically logged to a local JSONL file at `logs/chat_logs.jsonl`:
```json
{
  "session_id": "abc123xyz789...",
  "model_name": "sequential-thinking",
  "thought": "Let me break down this problem step by step...",
  "thought_number": 1,
  "total_thoughts": 5,
  "metadata": {
    "is_revision": false,
    "revises_thought": null,
    "branch_from_thought": null,
    "branch_id": null,
    "needs_more_thoughts": false,
    "next_thought_needed": true
  },
  "timestamp": "2024-01-15T10:30:00.123456"
}
```
### Accessing Local Logs

Local logs are written asynchronously to `logs/chat_logs.jsonl`. Each line is a complete JSON object representing one thought entry.
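Because each line is a standalone JSON object, the log can be read back with a few lines of Python. This is a minimal illustrative helper, not part of the app:

```python
import json
from pathlib import Path

def read_thoughts(path="logs/chat_logs.jsonl"):
    """Return a list of thought records parsed from a JSONL log file."""
    log = Path(path)
    if not log.exists():
        return []
    return [
        json.loads(line)
        for line in log.read_text(encoding="utf-8").splitlines()
        if line.strip()  # skip blank lines
    ]
```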
### HuggingFace Dataset Upload (Optional)

If you have HuggingFace Hub credentials configured, logs are automatically uploaded periodically.
#### Configuration

Set these environment variables:

```bash
export HF_TOKEN=hf_xxxxxxxxxxxxx
export HF_DATASET_REPO=username/my-sequential-thinking-logs
```
#### Upload Behavior

- **Interval**: Every 5 minutes (configurable in `chat_logger.py`)
- **Format**: Timestamped files named `logs/chat_logs_YYYYMMDD_HHMMSS.jsonl`
- **Cleanup**: The local file is cleared after a successful upload
- **Fallback**: If an upload fails, logs remain local and are retried on the next interval
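The upload cycle above can be sketched as follows. `upload_fn` stands in for the real `huggingface_hub` upload call, and `upload_once` is a hypothetical helper for illustration, not the app's actual `chat_logger.py` code:

```python
from datetime import datetime, timezone
from pathlib import Path

def upload_once(upload_fn, log_path="logs/chat_logs.jsonl"):
    """One upload cycle: push a timestamped copy, then clear the local file.

    upload_fn(local_path, remote_name) is a stand-in for the real
    huggingface_hub upload call (illustrative assumption).
    """
    log = Path(log_path)
    if not log.exists() or log.stat().st_size == 0:
        return False  # nothing to upload yet
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    remote_name = f"logs/chat_logs_{stamp}.jsonl"
    try:
        upload_fn(str(log), remote_name)
    except Exception:
        return False  # fallback: keep logs locally, retry next interval
    log.write_text("")  # cleanup after a successful upload
    return True
```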
#### Creating a HuggingFace Dataset

If the dataset doesn't exist, the logger creates it automatically on the first upload, as a private dataset.
## Running the App

### Standard Launch

```bash
python app.py
```
The app will:

- Start the Gradio interface at `http://localhost:7860`
- Begin logging all thoughts to `logs/chat_logs.jsonl`
- Upload to HuggingFace (if configured) every 5 minutes
- Flush remaining logs on shutdown
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `HF_TOKEN` | - | HuggingFace API token (optional) |
| `HF_DATASET_REPO` | - | HuggingFace dataset repo ID (optional) |
| `DISABLE_THOUGHT_LOGGING` | `false` | Suppress console output of thoughts |
### Disabling Console Output

```bash
DISABLE_THOUGHT_LOGGING=true python app.py
```
Logs will still be saved locally; only console printing is suppressed.
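A sketch of how such a boolean flag is commonly parsed; the exact accepted values in `chat_logger.py` may differ:

```python
import os

def thought_logging_disabled(env=os.environ):
    """True when DISABLE_THOUGHT_LOGGING is set to a truthy string.

    The accepted truthy values here are an assumption for illustration.
    """
    value = env.get("DISABLE_THOUGHT_LOGGING", "false")
    return value.strip().lower() in ("1", "true", "yes")
```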
## MCP Server
The app also exposes an MCP (Model Context Protocol) endpoint:
**SSE Endpoint**: `http://localhost:7860/gradio_api/mcp/sse`
Configure in your MCP client:

```json
{
  "mcpServers": {
    "sequential-thinking": {
      "url": "http://localhost:7860/gradio_api/mcp/sse"
    }
  }
}
```
## Log Data Structure

Each logged thought contains:

- `session_id`: Unique identifier for the current thinking session
- `model_name`: Always `"sequential-thinking"` for this app
- `thought`: The actual thought text
- `thought_number`: Current step number
- `total_thoughts`: Estimated total number of steps
- `metadata`: Additional context, including revision/branch info
- `timestamp`: ISO 8601 timestamp
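The schema above can be written down as typed dictionaries. These type names are illustrative, not classes the app exports:

```python
from typing import Optional, TypedDict

class ThoughtMetadata(TypedDict):
    is_revision: bool
    revises_thought: Optional[int]
    branch_from_thought: Optional[int]
    branch_id: Optional[str]
    needs_more_thoughts: bool
    next_thought_needed: bool

class ThoughtRecord(TypedDict):
    session_id: str
    model_name: str          # always "sequential-thinking"
    thought: str
    thought_number: int
    total_thoughts: int
    metadata: ThoughtMetadata
    timestamp: str           # ISO 8601
```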
## Sessions
A new session ID is generated:
- When the app starts
- When "Reset Session" is clicked in the UI
All thoughts in a session share the same `session_id`, making it easy to group related thinking processes.
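Because every record in a session carries the same `session_id`, grouping is straightforward; `group_by_session` is an illustrative helper, not part of the app:

```python
from collections import defaultdict

def group_by_session(entries):
    """Group thought records by session_id, preserving insertion order."""
    sessions = defaultdict(list)
    for entry in entries:
        sessions[entry["session_id"]].append(entry)
    return dict(sessions)
```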
## Data Privacy

- Local logs are stored in `logs/chat_logs.jsonl` on your machine
- Logs are uploaded to HuggingFace only if credentials are configured
- Uploaded datasets are set to private by default
- Logs are cleared from local storage after a successful HuggingFace upload
## Troubleshooting

### Logs Not Appearing

- Check that the `logs/` directory exists: `ls logs/`
- Verify the JSONL file is being written: `tail -f logs/chat_logs.jsonl`
- Check the console for error messages
### HuggingFace Upload Issues

- Verify that `HF_TOKEN` is valid: `huggingface-cli whoami`
- Ensure `HF_DATASET_REPO` follows the format `username/repo-name`
- Check your internet connection
- Failed uploads are retried on the next interval; check the console for error messages
## Performance
Logging runs in background threads and has minimal impact on performance:
- Queue-based async processing
- Non-blocking JSONL writes
- Periodic batch uploads to HuggingFace
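A minimal sketch of that queue-based pattern (not the app's actual `chat_logger.py`): callers enqueue records and return immediately, while a daemon thread performs the file writes.

```python
import json
import queue
import threading

class AsyncJsonlWriter:
    """Non-blocking JSONL logger: log() only enqueues; a background
    daemon thread drains the queue and does the file I/O."""

    def __init__(self, path):
        self.path = path
        self.queue = queue.Queue()
        self.worker = threading.Thread(target=self._drain, daemon=True)
        self.worker.start()

    def log(self, record):
        """Enqueue a record; returns immediately without touching disk."""
        self.queue.put(record)

    def _drain(self):
        while True:
            record = self.queue.get()
            if record is None:  # sentinel placed by close()
                break
            with open(self.path, "a", encoding="utf-8") as f:
                f.write(json.dumps(record) + "\n")

    def close(self):
        """Flush remaining records and stop the worker thread."""
        self.queue.put(None)
        self.worker.join()
```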