Commit: a7a6ad0
Parent(s): 0c10583
Refactor infra: update n8n, simplify Docker, add knowledge sync
Upgrades n8n to 1.108.2 and simplifies Dockerfile and docker-compose.yml for local and Hugging Face Spaces deployment. Adds scripts for knowledge base sync with Supabase and OpenAI embeddings, and provides a Supabase schema for documents and embeddings. Updates backup/restore scripts for direct Postgres and n8n API usage. Enhances README with ChromaDB setup and clarifies instructions. Adds a knowledge sync workflow blueprint and VSCode spell checker settings.
- .vscode/settings.json +8 -1
- README.md +53 -16
- docker/Dockerfile +38 -55
- docker/docker-compose.yml +23 -91
- package.json +11 -11
- scripts/backup.sh +17 -192
- scripts/restore.sh +12 -202
- scripts/sync-knowledge.mjs +95 -0
- supabase/schema.sql +32 -0
- workflows/examples/knowledge-sync-blueprint.json +76 -0
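For orientation, a rough sketch of how the updated pieces are driven from the repository root (the npm scripts are the ones defined in package.json after this commit; the `n8n import:workflow` call is an assumption that the CLI is reachable inside the running container, which the new compose file names `n8n-local` and gives a `./workflows:/workflows` mount):

```bash
# The package.json scripts wrap the Compose, backup and sync flows
npm run start            # prints the docker-compose command for the local stack
npm run backup           # bash scripts/backup.sh  -> pg_dump + n8n workflow export
npm run restore          # bash scripts/restore.sh -> pg_restore from a dump
npm run sync-knowledge   # bash scripts/sync-knowledge.sh

# Load the example workflow into the running n8n container
docker exec n8n-local n8n import:workflow \
  --input=/workflows/examples/knowledge-sync-blueprint.json
```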
.vscode/settings.json
CHANGED

@@ -1,3 +1,10 @@

Before:

{
  "DockerRun.DisableDockerrc": true
}

After:

{
  "DockerRun.DisableDockerrc": true,
  "spellright.language": [],
  "spellright.documentTypes": ["markdown", "latex", "plaintext"],
  "spellright.parserByClass": {
    "code-runner-output": {
      "parser": "markdown"
    }
  }
}
README.md
CHANGED

@@ -5,18 +5,21 @@ A comprehensive, production-ready infrastructure setup for deploying n8n automat

## 🚀 Features

### Core Platform

- **n8n v1.17.1**: Self-hosted workflow automation platform
- **Hugging Face Spaces**: Docker-based deployment with automatic scaling
- **Supabase PostgreSQL**: SSL-encrypted database with pgvector extension
- **ChromaDB**: Vector store for embeddings and AI-powered search

### AI & Automation

- **LangChain Integration**: Advanced AI workflow capabilities
- **Multi-Model Support**: OpenAI GPT, Anthropic Claude, Google Vertex AI
- **Vector Knowledge Base**: Automated content ingestion with embeddings
- **Community Nodes**: Extended functionality with custom AI nodes

### DevOps & Monitoring

- **GitHub Actions CI/CD**: Automated deployment and maintenance
- **Automated Backups**: Daily workflow and configuration backups
- **Knowledge Sync**: Multi-repository content synchronization
@@ -100,13 +103,14 @@ gh workflow run deploy-to-hf.yml

### Supabase Configuration

1. **Create Supabase Project**:

   ```sql
   -- Enable pgvector extension
   CREATE EXTENSION IF NOT EXISTS vector;

   -- Create knowledge base schema
   CREATE SCHEMA IF NOT EXISTS knowledge;

   -- Create embeddings table
   CREATE TABLE knowledge.embeddings (
     id UUID PRIMARY KEY DEFAULT gen_random_uuid(),

@@ -118,23 +122,24 @@ gh workflow run deploy-to-hf.yml

     created_at TIMESTAMPTZ DEFAULT NOW(),
     updated_at TIMESTAMPTZ DEFAULT NOW()
   );

   -- Create indexes for performance
   CREATE INDEX IF NOT EXISTS idx_embeddings_collection ON knowledge.embeddings(collection_name);
   CREATE INDEX IF NOT EXISTS idx_embeddings_content_id ON knowledge.embeddings(content_id);
   CREATE INDEX IF NOT EXISTS idx_embeddings_vector ON knowledge.embeddings
     USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);
   ```

2. **Configure Row Level Security**:

   ```sql
   -- Enable RLS
   ALTER TABLE knowledge.embeddings ENABLE ROW LEVEL SECURITY;

   -- Allow authenticated users to read embeddings
   CREATE POLICY "Users can read embeddings" ON knowledge.embeddings
     FOR SELECT TO authenticated USING (true);

   -- Allow service role to manage embeddings
   CREATE POLICY "Service role can manage embeddings" ON knowledge.embeddings
     FOR ALL TO service_role USING (true);
   ```
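Not in the README itself, but for context: once embeddings are stored in the table above, a nearest-neighbour lookup uses pgvector's cosine-distance operator. A minimal sketch, assuming a `$DATABASE_URL` connection string and a pre-computed query embedding of the same dimensionality as the stored vectors:

```bash
# '[0.011, -0.027, 0.034]' stands in for a real query embedding
psql "$DATABASE_URL" -c "
  SELECT content_id,
         embedding <=> '[0.011, -0.027, 0.034]'::vector AS cosine_distance
  FROM   knowledge.embeddings
  WHERE  collection_name = 'n8n'
  ORDER  BY cosine_distance
  LIMIT  5;"
```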
@@ -193,7 +198,7 @@ docker-compose -f docker/docker-compose.yml restart n8n

The system automatically syncs content from these repositories:

- **n8n Knowledge**: `/projects/n8n` - Workflow examples and best practices
- **Video & Animation**: `/projects/videos-e-animacoes` - Multimedia processing guides
- **Midjourney Prompts**: `/projects/midjorney-prompt` - AI art generation prompts

### Manual Knowledge Sync
@@ -225,6 +230,7 @@ Query the knowledge base in n8n workflows:

### Automated Backups

Daily backups include:

- All n8n workflows (exported as JSON)
- Encrypted credentials
- Database schema
@@ -268,12 +274,13 @@ curl http://localhost:8000/api/v1/heartbeat

### Performance Tuning

**Database Optimization**:

```sql
-- Monitor query performance
SELECT query, mean_exec_time, calls
FROM pg_stat_statements
WHERE query LIKE '%n8n%'
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Optimize vector searches
SET ivfflat.probes = 10;
```

**Container Resources**:

```yaml
# docker-compose.yml resource limits
services:

@@ -288,10 +296,10 @@ services:

    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 4G
        reservations:
          cpus: "1.0"
          memory: 2G
```
@@ -327,6 +335,7 @@ chmod 600 config/credentials/*

### Common Issues

**Connection Problems**:

```bash
# Test database connection
docker exec n8n-automation psql "$DB_POSTGRESDB_HOST" -U "$DB_POSTGRESDB_USER" -c "\l"

@@ -339,6 +348,7 @@ curl -I "$WEBHOOK_URL/healthz"

```

**Deployment Issues**:

```bash
# Check Hugging Face Space status
curl -I "https://huggingface.co/spaces/$HF_USERNAME/$HF_SPACE_NAME"

@@ -349,6 +359,7 @@ gh run view [run-id] --log

```

**Knowledge Sync Problems**:

```bash
# Manual knowledge sync debug
./scripts/sync-knowledge.sh
@@ -366,16 +377,18 @@ with open('knowledge/n8n/n8n_embeddings.json') as f:

### Recovery Procedures

**Emergency Restore**:

1. Stop all services: `docker-compose down`
2. Restore from latest backup: `./scripts/restore.sh [backup-name]`
3. Restart services: `docker-compose up -d`
4. Verify functionality: Access web interface

**Database Recovery**:

```sql
-- Check database integrity
SELECT schemaname, tablename, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
WHERE schemaname = 'public';

-- Rebuild vector indexes if needed
@@ -455,4 +468,28 @@ This project is licensed under the Apache License 2.0 - see the LICENSE file for

---

_Built with ❤️ for the n8n automation community_

### ChromaDB

ChromaDB is used as the vector store for embeddings, enabling advanced semantic search over the n8n workflows.

#### Configuration

1. **Get your authentication token (API key) from the Chroma Cloud dashboard**.
2. In the `.env` file, add the variables:

   ```dotenv
   CHROMA_AUTH_TOKEN=ck-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
   CHROMA_HOST=api.chroma.com
   CHROMA_PORT=443
   ```

3. Make sure the Chroma service is reachable and the token is correct.

4. For local use, set `CHROMA_HOST` to `localhost` and `CHROMA_PORT` to the configured port.

#### References

- [ChromaDB documentation](https://docs.trychroma.com/)
- [How to generate an API key on Chroma Cloud](https://docs.trychroma.com/cloud)
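A quick connectivity check against the values above (a sketch: it reuses the heartbeat endpoint the README's troubleshooting section already relies on; sending the token as an `Authorization: Bearer` header is an assumption about how the server's token auth is configured):

```bash
# Local instance (plain HTTP, default port)
curl -sS "http://localhost:8000/api/v1/heartbeat"

# Hosted instance using the .env values
curl -sS -H "Authorization: Bearer ${CHROMA_AUTH_TOKEN}" \
  "https://${CHROMA_HOST}:${CHROMA_PORT}/api/v1/heartbeat"
```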
docker/Dockerfile
CHANGED

The previous Dockerfile — based on an earlier n8nio/n8n 1.x image, with creation of /home/node/.n8n and knowledge directories, COPY of config/custom-nodes.json and knowledge/, ownership/permission fixes, a curl health check against http://localhost:7860/healthz, EXPOSE 7860, and CMD ["n8n", "start"] — was replaced with:

# Pin the n8n version for predictable upgrades/rollbacks
FROM n8nio/n8n:1.108.2

# Hugging Face Spaces injects $PORT; n8n should listen on it
ENV N8N_PORT=$PORT
ENV N8N_PROTOCOL=http
# Public URL (important for webhooks behind HF proxy)
ENV WEBHOOK_URL=${WEBHOOK_URL}

# Execution & retention (production-friendly defaults)
ENV EXECUTIONS_MODE=regular
ENV EXECUTIONS_DATA_SAVE_ON_ERROR=all
ENV EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
ENV EXECUTIONS_DATA_PRUNE=true
ENV EXECUTIONS_DATA_MAX_AGE=336
ENV QUEUE_BULL_REDIS_DISABLED=true

# Health/metrics
ENV N8N_METRICS=true
ENV QUEUE_HEALTH_CHECK_ACTIVE=true

# Database (set via Secrets in HF Space)
# ENV DB_TYPE=postgresdb
# ENV DB_POSTGRESDB_HOST=
# ENV DB_POSTGRESDB_PORT=5432
# ENV DB_POSTGRESDB_DATABASE=
# ENV DB_POSTGRESDB_USER=
# ENV DB_POSTGRESDB_PASSWORD=
# ENV DB_POSTGRESDB_SCHEMA=public
# ENV DB_POSTGRESDB_SSL=true
# ENV DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=false

# Security
# ENV N8N_ENCRYPTION_KEY=
# ENV N8N_USER_MANAGEMENT_JWT_SECRET=
# Optional: protect UI with Basic Auth
# ENV N8N_BASIC_AUTH_ACTIVE=true
# ENV N8N_BASIC_AUTH_USER=
# ENV N8N_BASIC_AUTH_PASSWORD=
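A local smoke test of an image built from this Dockerfile might look like the following (a sketch: the Space URL and secrets are placeholders, and `N8N_PORT` is passed explicitly at run time since outside of Spaces there is no injected `$PORT`):

```bash
docker build -f docker/Dockerfile -t n8n-hf:1.108.2 .

docker run --rm -p 7860:7860 \
  -e N8N_PORT=7860 \
  -e WEBHOOK_URL="https://<username>-<space>.hf.space/" \
  -e DB_TYPE=postgresdb \
  -e DB_POSTGRESDB_HOST="db.<project-ref>.supabase.co" \
  -e DB_POSTGRESDB_DATABASE="postgres" \
  -e DB_POSTGRESDB_USER="postgres" \
  -e DB_POSTGRESDB_PASSWORD="<password>" \
  -e DB_POSTGRESDB_SSL=true \
  -e N8N_ENCRYPTION_KEY="<32+ random chars>" \
  n8n-hf:1.108.2

# n8n's health endpoint, also used by the old HEALTHCHECK
curl -f http://localhost:7860/healthz
```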
docker/docker-compose.yml
CHANGED

The previous multi-service file — an n8n service built from docker/Dockerfile with HTTPS/host, security, database, AI-integration, queue-mode and logging variables, plus a chromadb/chroma vector-store service on port 8000 with token auth, an alpine-based backup-service, named n8n_data/vector_data volumes, and an n8n-network bridge network — was replaced with a single simplified service:

version: "3.9"
services:
  n8n:
    image: n8nio/n8n:1.108.2
    container_name: n8n-local
    ports:
      - "5678:5678"
    environment:
      - N8N_PORT=5678
      - N8N_PROTOCOL=http
      - WEBHOOK_URL=http://localhost:5678/
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=${DB_HOST}
      - DB_POSTGRESDB_PORT=${DB_PORT}
      - DB_POSTGRESDB_DATABASE=${DB_NAME}
      - DB_POSTGRESDB_USER=${DB_USER}
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      - DB_POSTGRESDB_SCHEMA=public
      - DB_POSTGRESDB_SSL=true
      - DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=false
      - EXECUTIONS_MODE=regular
      - EXECUTIONS_DATA_SAVE_ON_ERROR=all
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=336
      - N8N_METRICS=true
      - QUEUE_HEALTH_CHECK_ACTIVE=true
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_USER_MANAGEMENT_JWT_SECRET=${N8N_USER_MANAGEMENT_JWT_SECRET}
    volumes:
      - ./config:/config
      - ./workflows:/workflows
    restart: unless-stopped
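A matching `.env` for this compose file could look like the sketch below (all values are placeholders for the Supabase connection and the n8n secrets the file interpolates):

```bash
cat > .env <<'EOF'
DB_HOST=db.xxxxxxxx.supabase.co
DB_PORT=5432
DB_NAME=postgres
DB_USER=postgres
DB_PASSWORD=change-me
N8N_ENCRYPTION_KEY=change-me-32-random-chars
N8N_USER_MANAGEMENT_JWT_SECRET=change-me-too
EOF

docker-compose -f docker/docker-compose.yml --env-file .env up -d
docker-compose -f docker/docker-compose.yml logs -f n8n
```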
package.json
CHANGED

@@ -5,7 +5,7 @@

  "private": true,
  "keywords": [
    "n8n",
    "workflow-automation",
    "ai-integration",
    "langchain",
    "vector-database",

@@ -14,25 +14,25 @@

    "devops"
  ],
  "scripts": {
    "dev": "echo 'Use Docker Compose para rodar o ambiente: docker-compose -f docker/docker-compose.yml up -d'",
    "start": "echo 'Use Docker Compose para rodar o ambiente: docker-compose -f docker/docker-compose.yml up -d'",
    "stop": "docker-compose -f docker/docker-compose.yml down",
    "logs": "docker-compose -f docker/docker-compose.yml logs -f n8n",
    "backup": "bash scripts/backup.sh",
    "restore": "bash scripts/restore.sh",
    "sync-knowledge": "bash scripts/sync-knowledge.sh",
    "build": "docker-compose -f docker/docker-compose.yml build",
    "clean": "docker system prune -af",
    "deploy": "gh workflow run deploy-to-hf.yml",
    "test": "bash scripts/test-infrastructure.sh"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/danilonovaisv/n8n-infra.git"
  },
  "author": {
    "name": "Danilo Novais",
    "email": "danilo_novais@yahoo.com.br"
  },
  "license": "Apache-2.0",
  "engines": {

@@ -42,4 +42,4 @@

  "devDependencies": {
    "@types/node": "^20.0.0"
  }
}
scripts/backup.sh
CHANGED

The previous ~190-line script — colored logging helpers, Docker checks, workflow export via the n8n CLI inside the n8n-automation container, credential and knowledge-base copies, a schema-only pg_dump, backup metadata JSON, retention cleanup, and optional tar.gz archiving — was replaced with a direct Postgres dump plus an n8n API export:

#!/usr/bin/env bash
set -euo pipefail

: "${DB_HOST?Missing DB_HOST}"
: "${DB_PORT:=5432}"
: "${DB_NAME?Missing DB_NAME}"
: "${DB_USER?Missing DB_USER}"
: "${DB_PASSWORD?Missing DB_PASSWORD}"
: "${N8N_BASE_URL?Missing N8N_BASE_URL}"
: "${N8N_API_KEY?Missing N8N_API_KEY}"

TS=$(date +%Y%m%d-%H%M%S)
OUTDIR="workflows/backup/${TS}"
mkdir -p "$OUTDIR"

echo "==> Dumping Postgres (Supabase) ..."
export PGPASSWORD="${DB_PASSWORD}"
pg_dump -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_USER}" -d "${DB_NAME}" -F c -Z 5 -f "${OUTDIR}/db.dump"

echo "==> Exporting n8n workflows ..."
curl -sS -H "X-N8N-API-KEY: ${N8N_API_KEY}" "${N8N_BASE_URL}/rest/workflows" > "${OUTDIR}/workflows.json"

echo "==> Done. Artifacts at ${OUTDIR}"
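Example invocation of the rewritten backup script (a sketch; the values are placeholders, and `N8N_BASE_URL`/`N8N_API_KEY` must point at a running instance with API access enabled):

```bash
export DB_HOST=db.xxxxxxxx.supabase.co DB_NAME=postgres DB_USER=postgres DB_PASSWORD=change-me
export N8N_BASE_URL=https://<username>-<space>.hf.space N8N_API_KEY=<n8n-api-key>
./scripts/backup.sh
# -> workflows/backup/<timestamp>/db.dump and workflows/backup/<timestamp>/workflows.json
```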
scripts/restore.sh
CHANGED

The previous ~200-line script — backup lookup and verification, workflow import via the n8n CLI, credential copy back into the container, an interactive schema restore, and a restart/health-check wait loop — was replaced with a direct pg_restore:

#!/usr/bin/env bash
set -euo pipefail

: "${DB_HOST?Missing DB_HOST}"
: "${DB_PORT:=5432}"
: "${DB_NAME?Missing DB_NAME}"
: "${DB_USER?Missing DB_USER}"
: "${DB_PASSWORD?Missing DB_PASSWORD}"
: "${DUMP_PATH?Usage: DUMP_PATH=/path/to/db.dump ./scripts/restore.sh}"

echo "==> Restoring Postgres from ${DUMP_PATH} ..."
export PGPASSWORD="${DB_PASSWORD}"
pg_restore -h "${DB_HOST}" -p "${DB_PORT}" -U "${DB_USER}" -d "${DB_NAME}" --clean --if-exists "${DUMP_PATH}"
echo "==> Done."
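And the counterpart restore, pointing `DUMP_PATH` at an artifact produced by `backup.sh` (placeholders again; the timestamp follows the `%Y%m%d-%H%M%S` format the backup script uses):

```bash
DUMP_PATH=workflows/backup/20240101-120000/db.dump \
DB_HOST=db.xxxxxxxx.supabase.co DB_NAME=postgres DB_USER=postgres DB_PASSWORD=change-me \
./scripts/restore.sh
```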
scripts/sync-knowledge.mjs
ADDED

// Node 20 script: sync knowledge repos into Supabase with embeddings
// Requires: OPENAI_API_KEY, SUPABASE_URL, SUPABASE_SERVICE_ROLE_KEY, KNOWLEDGE_REPO_URL, KNOWLEDGE_DIRS
import { createClient } from '@supabase/supabase-js';
import crypto from 'node:crypto';
import { execSync } from 'node:child_process';
import fs from 'node:fs';
import path from 'node:path';
import process from 'node:process';
import OpenAI from 'openai';

const {
  OPENAI_API_KEY,
  SUPABASE_URL,
  SUPABASE_SERVICE_ROLE_KEY,
  KNOWLEDGE_REPO_URL,
  KNOWLEDGE_DIRS = 'projects/n8n,projects/videos-e-animacoes,projects/midjorney-prompt',
} = process.env;

if (!SUPABASE_URL || !SUPABASE_SERVICE_ROLE_KEY || !KNOWLEDGE_REPO_URL) {
  console.error('Missing env SUPABASE_URL or SUPABASE_SERVICE_ROLE_KEY or KNOWLEDGE_REPO_URL');
  process.exit(1);
}

const openai = OPENAI_API_KEY ? new OpenAI({ apiKey: OPENAI_API_KEY }) : null;
const supabase = createClient(SUPABASE_URL, SUPABASE_SERVICE_ROLE_KEY);

const workdir = path.resolve('knowledge');
if (!fs.existsSync(workdir)) fs.mkdirSync(workdir, { recursive: true });

const repoDir = path.join(workdir, 'CHATGPT-knowledge-base');
if (!fs.existsSync(repoDir)) {
  console.log('Cloning KB repo...');
  execSync(`git clone --depth 1 ${KNOWLEDGE_REPO_URL} ${repoDir}`, { stdio: 'inherit' });
} else {
  console.log('Pulling KB repo...');
  execSync(`git -C ${repoDir} pull`, { stdio: 'inherit' });
}

const dirs = KNOWLEDGE_DIRS.split(',').map(s => s.trim());

function sha256(s){ return crypto.createHash('sha256').update(s).digest('hex'); }

async function upsertDoc(pth, content) {
  const title = path.basename(pth);
  const hash = sha256(content);

  // Upsert document
  const { data: doc, error: docErr } = await supabase
    .from('knowledge.documents')
    .upsert({ path: pth, title, content, hash }, { onConflict: 'path' })
    .select()
    .single();
  if (docErr) throw docErr;

  if (openai) {
    // Embedding
    const input = content.slice(0, 12000); // truncate
    const emb = await openai.embeddings.create({
      model: 'text-embedding-3-large',
      input
    });
    const vector = emb.data[0].embedding;

    const { error: embErr } = await supabase
      .from('knowledge.embeddings')
      .upsert({ doc_id: doc.id, embedding: vector, model: 'text-embedding-3-large' });
    if (embErr) throw embErr;
  } else {
    console.warn('OPENAI_API_KEY not set, skipping embeddings for', pth);
  }
}

async function main() {
  for (const rel of dirs) {
    const abs = path.join(repoDir, rel);
    if (!fs.existsSync(abs)) {
      console.warn('Skip missing dir:', rel);
      continue;
    }
    const entries = await fs.promises.readdir(abs, { withFileTypes: true });
    for (const ent of entries) {
      const full = path.join(abs, ent.name);
      if (ent.isDirectory()) continue;
      if (!/\.(md|markdown|json|txt)$/i.test(ent.name)) continue;

      const content = await fs.promises.readFile(full, 'utf8');
      const repoRelPath = path.relative(repoDir, full);
      console.log('Ingest:', repoRelPath);
      await upsertDoc(repoRelPath, content);
    }
  }
  console.log('Sync complete');
}

main().catch(err => { console.error(err); process.exit(1); });
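To run the script locally it needs Node 20 with the two client libraries it imports, plus the environment it checks for (a sketch; the repo URL and keys are placeholders):

```bash
npm install @supabase/supabase-js openai

export SUPABASE_URL="https://<project-ref>.supabase.co"
export SUPABASE_SERVICE_ROLE_KEY="<service-role-key>"
export KNOWLEDGE_REPO_URL="https://github.com/<owner>/CHATGPT-knowledge-base.git"
export OPENAI_API_KEY="sk-..."          # optional: without it, documents are upserted but embeddings are skipped
export KNOWLEDGE_DIRS="projects/n8n"    # optional: defaults to the three directories listed in the script

node scripts/sync-knowledge.mjs
```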
supabase/schema.sql
ADDED

-- Enable pgvector (Supabase: run as superuser/migration)
create extension if not exists vector;

-- Schema
create schema if not exists knowledge;

-- Documents table
create table if not exists knowledge.documents (
  id uuid primary key default gen_random_uuid(),
  path text unique not null,
  title text,
  content text,
  source_url text,
  hash text not null,
  updated_at timestamptz not null default now()
);

-- Embeddings table (OpenAI text-embedding-3-large: 3072 dims; adjust if needed)
create table if not exists knowledge.embeddings (
  doc_id uuid primary key references knowledge.documents(id) on delete cascade,
  embedding vector(3072) not null,
  model text default 'text-embedding-3-large'
);

-- Vector index (IVFFLAT)
create index if not exists idx_embeddings_ivfflat on knowledge.embeddings using ivfflat (embedding vector_l2_ops) with (lists = 100);

-- Helpful view
create or replace view knowledge.searchable as
select d.id, d.title, d.path, d.source_url, d.updated_at
from knowledge.documents d
order by d.updated_at desc;
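Applying the schema is a one-liner against the Supabase connection string (placeholder below). One caveat worth checking before relying on the index: pgvector's ivfflat index has historically been limited to 2000 dimensions, so the 3072-dimension `text-embedding-3-large` column may need a smaller embedding size or a different index type depending on the pgvector version Supabase provides.

```bash
# Connection string is a placeholder; Supabase shows the real one in the project settings
psql "postgresql://postgres:<password>@db.<project-ref>.supabase.co:5432/postgres" \
  -f supabase/schema.sql
```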
workflows/examples/knowledge-sync-blueprint.json
ADDED

{
  "name": "Knowledge Sync (Webhook \u2192 Postgres)",
  "nodes": [
    {
      "parameters": {},
      "id": "Start",
      "name": "Webhook Trigger",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 1,
      "position": [-460, 0],
      "webhookId": "replace-with-your-id",
      "path": "knowledge-sync",
      "httpMethod": "POST",
      "responseMode": "onReceived",
      "responseData": "allEntries"
    },
    {
      "parameters": {
        "functionCode": "items = items.map(item => ({ json: {\n path: item.json.path,\n title: item.json.title || item.json.path.split('/').pop(),\n content: item.json.content,\n hash: item.json.hash\n}}));\nreturn items;"
      },
      "id": "Map",
      "name": "Map Fields",
      "type": "n8n-nodes-base.function",
      "typeVersion": 2,
      "position": [-200, 0]
    },
    {
      "parameters": {
        "operation": "executeQuery",
        "query": "INSERT INTO knowledge.documents(path, title, content, hash)\nVALUES(:path, :title, :content, :hash)\nON CONFLICT (path) DO UPDATE SET\n title=EXCLUDED.title,\n content=EXCLUDED.content,\n hash=EXCLUDED.hash,\n updated_at=now()\nRETURNING id;"
      },
      "id": "Postgres",
      "name": "Postgres Upsert",
      "type": "n8n-nodes-base.postgres",
      "typeVersion": 1,
      "position": [60, 0],
      "credentials": {
        "postgres": "REPLACE_WITH_YOUR_CREDENTIAL"
      }
    }
  ],
  "connections": {
    "Webhook Trigger": {
      "main": [
        [
          {
            "node": "Map Fields",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Map Fields": {
      "main": [
        [
          {
            "node": "Postgres Upsert",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  },
  "active": false
}