The full dataset viewer is not available. Only showing a preview of the rows.
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 new columns ({'messages'}) and 3 missing columns ({'output', 'input', 'instruction'}).
This happened while the json dataset builder was generating data using
hf://datasets/MustaphaL/training_data_n8n_workflows/training_data_openai.jsonl (at revision 6f668fd762d1bda93152a549425b1ecfe506d160)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
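The second suggested fix, separating the files into different configurations, is done in the dataset's README.md YAML header per the linked manual-configuration docs. A minimal sketch; the config names and the first file name are assumptions (only `training_data_openai.jsonl` is named on this page):

```yaml
configs:
  - config_name: alpaca_format                 # files with instruction / input / output columns
    data_files: "training_data.jsonl"          # assumed file name, not confirmed by this page
  - config_name: chat_format                   # files with the messages column
    data_files: "training_data_openai.jsonl"
```

With this header in place, each configuration is cast with its own schema, so the viewer no longer tries to force both column layouts into one table.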
Traceback:
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
    writer.write_table(table)
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 714, in write_table
    pa_table = table_cast(pa_table, self._schema)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
    return cast_table_to_schema(table, schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
messages: list<item: struct<role: string, content: string>>
  child 0, item: struct<role: string, content: string>
      child 0, role: string
      child 1, content: string
to
{'instruction': Value('string'), 'input': Value('string'), 'output': Value('string')}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1339, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 972, in convert_to_parquet
    builder.download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 894, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 970, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 new columns ({'messages'}) and 3 missing columns ({'output', 'input', 'instruction'}).
This happened while the json dataset builder was generating data using
hf://datasets/MustaphaL/training_data_n8n_workflows/training_data_openai.jsonl (at revision 6f668fd762d1bda93152a549425b1ecfe506d160)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
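The first suggested fix is to edit the files so they share one schema. A sketch of converting chat-format records (`messages`) into the `instruction`/`input`/`output` shape used by the other files; the role mapping (first user turn becomes the instruction, first assistant turn becomes the output, a system turn becomes the input) is an assumption about how the two formats correspond, not something stated on this page:

```python
import json

def chat_to_alpaca(record: dict) -> dict:
    """Map a {"messages": [...]} chat record to instruction/input/output.

    Assumed mapping: first user turn -> instruction, first assistant
    turn -> output, optional system turn -> input.
    """
    instruction, input_text, output = "", "", ""
    for msg in record.get("messages", []):
        role, content = msg.get("role"), msg.get("content", "")
        if role == "system" and not input_text:
            input_text = content
        elif role == "user" and not instruction:
            instruction = content
        elif role == "assistant" and not output:
            output = content
    return {"instruction": instruction, "input": input_text, "output": output}

def convert_jsonl(src_path: str, dst_path: str) -> None:
    """Rewrite a chat-format JSONL file into the Alpaca-style schema."""
    with open(src_path, encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            if line.strip():
                dst.write(json.dumps(chat_to_alpaca(json.loads(line))) + "\n")
```

After converting (or after splitting into configurations as the message also suggests), every file the builder reads presents the same column set, so the cast step above succeeds.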
| instruction (string) | input (string) | output (string) |
|---|---|---|
Create an n8n workflow for: Build Your First AI Agent
Description:

**How it works**

This template launches your very first AI Agent — an AI-powered chatbot that can do more than just talk — it can take action using tools. Think of an AI Agent as a smart assistant, and the tools as the apps on its phone. By connecting it to other nodes, you give your agent the ability to interact with real-world data and services, like checking the weather, fetching news, or even sending emails on your behalf. This workflow is designed to be the perfect starting point:

- **The Chat Interface:** A Chat Trigger node provides a simple, clean interface for you to talk to your agent.
- **The Brains:** The AI Agent node receives your messages, intelligently decides which tool to use (if any), and formulates a helpful response. Its personality and instructions are fully customizable in the "System Message".
- **The Language Model:** It uses **Google Gemini** to power its reasoning and conversation skills.
- **The Tools:** It comes pre-equipped with two tools to demonstrate its capabilities. Get Weather: fetches real-time weather forecasts. Get News: reads any RSS feed to get the latest headlines.
- **The Memory:** A Conversation Memory node allows the agent to remember the last few messages, enabling natural, follow-up conversations.

**Set up steps**

Setup time: ~2 minutes. You only need one thing to get started: a free Google AI API key.

1. Get Your Google AI API Key: Visit Google AI Studio at aistudio.google.com/app/apikey. Click "Create API key in new project" and copy the key that appears.
2. Add Your Credential in n8n: On the workflow canvas, go to the Connect your model (Google Gemini) node. Click the Credential dropdown and select + Create New Credential. Paste your API key into the API Key field and click Save.
3. Start Chatting! Go to the Example Chat node. Click the "Open Chat" button in its parameter panel. Try asking it one of the example questions, like: "What's the weather in Paris?" or "Get me the latest tech news."

That's it! You now have a fully functional AI Agent. Try adding more tools (like Gmail or Google Calendar) to make it even more powerful. | {
"nodes": [
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 3,
"node_types": [
"AI Agent",
"Simple Memory",
"Google Gemini Chat Model"
]
} | |
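In each output cell above, the `node_count` and `node_types` fields are derivable from the `nodes` list. A small sketch of that derivation (the function name is ours, not part of the dataset):

```python
def summarize_nodes(nodes: list) -> dict:
    """Derive node_count and node_types from a workflow's nodes list,
    keeping first-seen order and dropping duplicate type names."""
    types = list(dict.fromkeys(n["type"] for n in nodes))
    return {"node_count": len(nodes), "node_types": types}
```

Run on the three nodes above, this yields `node_count: 3` and `node_types: ["AI Agent", "Simple Memory", "Google Gemini Chat Model"]`, matching the cell.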
Create an n8n workflow for: 🤖 Build an Interactive AI Agent with Chat Interface and Multiple Tools
Description:

**How it works**

This template is a complete, hands-on tutorial that lets you build and interact with your very first AI Agent. Think of an AI Agent as a standard AI chatbot with superpowers. The agent doesn't just talk; it can use tools to perform actions and find information in real-time. This workflow is designed to show you exactly how that works.

- The Chat Interface (Chat Trigger): This is your window to the agent. It's a fully styled, public-facing chat window where you can have a conversation.
- The Brain (AI Agent Node): This is the core of the operation. It takes your message, understands your intent, and intelligently decides which "superpower" (or tool) it needs to use to answer your request. The agent's personality and instructions are defined in its extensive system prompt.
- The Tools (Tool Nodes): These are the agent's superpowers. We've included a variety of useful and fun tools to showcase its capabilities: get a random joke; search Wikipedia for a summary of any topic; calculate a future date; generate a secure password; calculate a monthly loan payment; fetch the latest articles from the n8n blog.
- The Memory (Memory Node): This gives the agent a short-term memory, allowing it to remember the last few messages in your conversation for better context.

When you send a message, the agent's brain analyzes it, picks the right tool for the job, executes it, and then formulates a helpful response based on the tool's output.

**Set up steps**

Setup time: ~3 minutes. This template is nearly ready to go out of the box. You just need to provide the AI's "brain."

1. Configure Credentials: This workflow requires an API key for an AI model. Make sure you have credentials set up in your n8n instance for either Google AI (Gemini) or OpenAI.
2. Choose Your AI Brain (LLM): By default, the workflow uses the Google Gemini node. If you have Google AI credentials, you're all set! If you prefer to use OpenAI, simply disable the Gemini node and enable the OpenAI node. You only need one active LLM node. Make sure it is connected to the Agent parent node.
3. Explore the Tools: Take a moment to look at the different tool nodes connected to the Your First AI Agent node. This is where the agent gets its abilities! You can add, remove, or modify these to create your own custom agent.
4. Activate and Test! Activate the workflow. Open the public URL for the Example Chat Window node (you can copy it from the node's panel). Start chatting! Try asking it things like: "Tell me a joke." "What is n8n?" "Generate a 16-character password for me." "What are the latest posts on the n8n blog?" "What is the monthly payment for a $300,000 loan at 5% interest over 30 years?" | {
"nodes": [
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.toolCode",
"type": "Code Tool",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.toolWikipedia",
"type": "Wikipedia",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 6,
"node_types": [
"AI Agent",
"OpenAI Chat Model",
"Simple Memory",
"Code Tool",
"Wikipedia",
"Google Gemini Chat Model"
]
} | |
Create an n8n workflow for: Auto-Create TikTok Videos with VEED.io AI Avatars, ElevenLabs & GPT-4
Description: 💥 Viral TikTok Video Machine: Auto-Create Videos with Your AI Avatar

🎯 Who is this for?
This workflow is for content creators, marketers, and agencies who want to use Veed.io’s AI avatar technology to produce short, engaging TikTok videos automatically. It’s ideal for creators who want to appear on camera without recording themselves, and for teams managing multiple brands who need to generate videos at scale.

⚙️ What problem this workflow solves
Manually creating videos for TikTok can take hours — finding trends, writing scripts, recording, and editing. By combining Veed.io, ElevenLabs, and GPT-4, this workflow transforms a simple Telegram input into a ready-to-post TikTok video featuring your AI avatar powered by Veed.io — speaking naturally with your cloned voice.

🚀 What this workflow does
This automation links Veed.io’s video-generation API with multiple AI tools:
- Analyzes TikTok trends via Perplexity AI
- Writes a 10-second viral script using GPT-4
- Generates your voiceover via ElevenLabs
- Uses Veed.io (Fabric 1.0 via FAL.ai) to animate your avatar and sync the lips to the voice
- Creates an engaging caption + hashtags for TikTok virality
- Publishes the video automatically via the Blotato TikTok API
- Logs all results to Google Sheets for tracking

🧩 Setup
1. Telegram Bot: Create your bot via @BotFather and configure it as the trigger for sending your photo and theme.
2. Connect Veed.io: Create an account on Veed.io, get your FAL.ai API key (Veed Fabric 1.0 model), and use HTTPS image/audio URLs compatible with Veed Fabric.
3. Other APIs: Add Perplexity, ElevenLabs, and Blotato TikTok keys, and connect your Google Sheet for logging results.

🛠️ How to customize this workflow
- **Change your Avatar:** Upload a new image through Telegram, and **Veed.io** will generate a new talking version automatically.
- **Modify the Script Style:** Adjust the GPT prompt for tone (educational, funny, storytelling).
- **Adjust Voice Tone:** Tweak **ElevenLabs** stability and similarity settings.
- **Expand Platforms:** Add Instagram, YouTube Shorts, or X (Twitter) posting nodes.
- **Track Performance:** Customize your Google Sheet to measure your most successful Veed.io-based videos.

🧠 Expected Outcome
In just a few seconds after sending your photo and theme, this workflow — powered by Veed.io — creates a fully automated TikTok video featuring your AI avatar with natural lip-sync and voice. The result is a continuous stream of viral short videos, made without cameras, editing, or effort.

✅ Import the JSON file in n8n, add your API keys (including Veed.io via FAL.ai), and start generating viral TikTok videos starring your AI avatar today!

🎥 Watch This Tutorial
📄 Documentation: Notion Guide
Need help customizing? Contact me for consulting and support: Linkedin / Youtube | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.telegram",
"type": "Telegram",
"category": [
"Communication",
"HITL"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.openAi",
"type": "OpenAI",
"category": [
"AI",
"Langchain"
]
},
{
"name": "n8n-nodes-base.perplexity",
"type": "Perplexity",
"category": [
"Utility"
]
}
],
"node_count": 6,
"node_types": [
"Google Sheets",
"HTTP Request",
"Telegram",
"Code",
"OpenAI",
"Perplexity"
]
} | |
Create an n8n workflow for: Talk to Your Google Sheets Using ChatGPT-5
Description: This n8n workflow template creates an intelligent data analysis chatbot that can answer questions about data stored in Google Sheets using OpenAI's GPT-5 Mini model. The system automatically analyzes your spreadsheet data and provides insights through natural language conversations.

What This Workflow Does
- **Chat Interface**: Provides a conversational interface for asking questions about your data
- **Smart Data Analysis**: Uses AI to understand column structures and data relationships
- **Google Sheets Integration**: Connects directly to your Google Sheets data
- **Memory Buffer**: Maintains conversation context for follow-up questions
- **Automated Column Detection**: Automatically identifies and describes your data columns

🚀 Try It Out!

1. Set Up OpenAI Connection: Visit the OpenAI API Keys page, go to OpenAI Billing, and add funds to your billing account. Copy your API key into your OpenAI credentials in n8n (or your chosen platform).
2. Prepare Your Google Sheet: Data must follow the Sample Marketing Data format: the **first row** contains column names, and data should be in rows 2–100. Log in using OAuth, then select your workbook and sheet.
3. Ask Questions of Your Data: You can ask natural language questions to analyze your marketing data, such as:
- **Total spend** across all campaigns.
- **Spend for Paid Search only**.
- **Month-over-month changes** in ad spend.
- **Top-performing campaigns** by conversion rate.
- **Cost per lead** for each channel.

📬 Need Help or Want to Customize This? 📧 🔗 LinkedIn 🔗 n8n Automation Experts | {
"nodes": [
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 3,
"node_types": [
"AI Agent",
"OpenAI Chat Model",
"Simple Memory"
]
} | |
Create an n8n workflow for: Generate & Auto-post AI Videos to Social Media with Veo3 and Blotato
Description: Automate video creation with Veo3 and auto-post to Instagram and TikTok via Blotato.

Who is this for?
This template is ideal for content creators, social media managers, YouTubers, and digital marketers who want to generate high-quality videos daily using AI and distribute them effortlessly across multiple platforms. It’s perfect for anyone who wants to scale short-form content creation without video editing tools.

What problem is this workflow solving?
Creating and distributing consistent video content requires generating ideas, writing scripts and prompts, rendering videos, and manually posting to platforms. This workflow automates all of that. It transforms one prompt into a professional AI-generated video and publishes it automatically — saving time and increasing reach.

What this workflow does
1. Triggers daily to generate a new idea with OpenAI (or your custom prompt).
2. Creates a video prompt formatted specifically for Google Veo3.
3. Generates a cinematic video using the Veo3 API.
4. Logs the video data into a Google Sheet.
5. Retrieves the final video URL once Veo3 finishes rendering.
6. Uploads the video to Blotato for publishing.
7. Auto-posts the video to Instagram, TikTok, YouTube, Facebook, LinkedIn, Threads, Twitter (X), Pinterest, and Bluesky.

Setup
1. Add your OpenAI API key to the GPT-4.1 nodes.
2. Connect your Veo3 API credentials in the video generation node.
3. Link your Google Sheets account and use a sheet with columns: Prompt, Video URL, Status.
4. Connect your Blotato API key and set your platform IDs in the Assign Social Media IDs node.
5. Adjust the Schedule Trigger to your desired posting frequency.

How to customize this workflow to your needs
- **Edit the AI prompt** to align with your niche (fitness, finance, education, etc.).
- **Add your own branding overlays** using JSON2Video or similar tools.
- **Change platform selection** by enabling/disabling specific HTTP Request nodes.
- **Add a Telegram step** to preview the video before auto-posting.
- **Track performance** by adding metrics columns in Google Sheets.

📄 Documentation: Notion Guide
Need help customizing? Contact me for consulting and support: Linkedin / Youtube | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.toolThink",
"type": "Think Tool",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 6,
"node_types": [
"Google Sheets",
"HTTP Request",
"AI Agent",
"OpenAI Chat Model",
"Structured Output Parser",
"Think Tool"
]
} | |
Create an n8n workflow for: Jarvis: Productivity AI Agent for Tasks, Calendar, Email & Expense using MCPs
Description:

Who’s it for
This template is designed for anyone who wants to use Telegram as a personal AI assistant hub. If you often juggle tasks, emails, calendars, and expenses across multiple tools, this workflow consolidates everything into one seamless AI-powered agent.

What it does
Jarvis listens to your Telegram messages (text or audio) and processes them with OpenAI. Based on your request, it can:
- ✅ Manage tasks (create, complete, or delete)
- 📅 Handle calendar events (schedule, reschedule, or check availability)
- 📧 Send, draft, or fetch emails with Gmail
- 👥 Retrieve Google Contacts
- 💵 Log and track expenses
All responses are returned directly to Telegram, giving you a unified command center.

How to set up
1. Clone this template into your n8n workspace.
2. Connect your accounts (Telegram, Gmail, Google Calendar, Contacts, etc.).
3. Add your OpenAI API key in the Credentials section.
4. Test by sending a Telegram message like “Create a meeting tomorrow at 3pm”, “Add expense $50 for lunch”, or "Draft a reply with a project proposal to that email from Steve".

Requirements
- n8n instance (cloud or self-hosted)
- Telegram Bot API credentials
- Gmail, Google Calendar, and Google Contacts credentials (optional, if using those features)
- OpenAI API key
- ElevenLabs API key (optional, if you need audio note support)

How to customize
- Swap Gmail with another email provider by replacing the Gmail MCP node.
- Add additional MCP integrations (e.g., Notion, Slack, CRM tools).
- Adjust memory length to control how much context Jarvis remembers.

With this template, you can transform Telegram into your all-in-one AI assistant, simplifying workflows and saving hours every week. | {
"nodes": [
{
"name": "n8n-nodes-base.telegram",
"type": "Telegram",
"category": [
"Communication",
"HITL"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.toolThink",
"type": "Think Tool",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.mcpClientTool",
"type": "MCP Client Tool",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 6,
"node_types": [
"Telegram",
"AI Agent",
"OpenAI Chat Model",
"Simple Memory",
"Think Tool",
"MCP Client Tool"
]
} | |
Create an n8n workflow for: N8N Documentation Expert Chatbot with OpenAI RAG Pipeline
Description:

How It Works
This template is a complete, hands-on tutorial for building a RAG (Retrieval-Augmented Generation) pipeline. In simple terms, you'll teach an AI to become an expert on a specific topic—in this case, the official n8n documentation—and then build a chatbot to ask it questions. Think of it like this: instead of a general-knowledge AI, you're building an expert librarian.

🔧 Workflow Overview
The workflow is split into two main parts:

Part 1: Indexing the Knowledge (📚 Building the Library)
This is a one-time process you run manually. The workflow will:
1. Automatically scrape all pages of the n8n documentation.
2. Break them down into small, digestible chunks.
3. Use an AI model to create a numerical representation (an embedding) for each chunk.
4. Store these embeddings in n8n's built-in Simple Vector Store.
> This is like a librarian reading every book and creating a hyper-detailed index card for every paragraph.
> ⚠️ Important: This in-memory knowledge base is temporary. It will be erased if you restart your n8n instance. You'll need to run the indexing process again in that case.

Part 2: The AI Agent (🧠 The Expert Librarian)
This is the chat interface. When you ask a question, the AI agent doesn't guess the answer. It searches the knowledge base to find the most relevant “index cards” (chunks), then feeds those chunks to a language model (Gemini) with strict instructions:
> “Answer the user's question using ONLY this information.”
This ensures answers are accurate, factual, and grounded in your documents.

🚀 Setup Steps
> Total setup time: ~2 minutes
> Indexing time: ~15–20 minutes
This template uses n8n’s built-in tools, so no external database is needed.
1. Configure OpenAI Credentials: You’ll need an OpenAI API key (for GPT models). In your n8n workflow, go to any of the three OpenAI nodes (e.g., OpenAI Chat Model), click the Credential dropdown → + Create New Credential, then enter your OpenAI API key and save.
2. Apply Credentials to All Nodes: Your new credential is now saved. Go to the other two OpenAI nodes (e.g., OpenAI Embeddings) and select the newly created credential from the dropdown.
3. Build the Knowledge Base: Find the Start Indexing manual trigger node (top-left of the workflow) and click the Execute Workflow button to start indexing.
> ⚠️ Be patient: This takes 15–20 minutes to scrape and process the full documentation. You only need to do this once per n8n session.
4. Chat With Your Expert Agent: After indexing completes, activate the entire workflow (toggle at the top). Open the RAG Chatbot chat trigger node (bottom-left), copy its Public URL, open it in a new tab, and ask questions about n8n! Example questions: "How does the IF node work?" "What is a sub-workflow?"

👤 Credits
All credits go to Lucas Peyrin
🔗 lucaspeyrin on n8n.io | {
"nodes": [
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.html",
"type": "HTML",
"category": [
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
"type": "Embeddings OpenAI",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
"type": "Recursive Character Text Splitter",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.vectorStoreInMemory",
"type": "Simple Vector Store",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
"type": "Default Data Loader",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 9,
"node_types": [
"HTTP Request",
"HTML",
"AI Agent",
"Embeddings OpenAI",
"OpenAI Chat Model",
"Simple Memory",
"Recursive Character Text Splitter",
"Simple Vector Store",
"Default Data Loader"
]
} | |
Create an n8n workflow for: Nutrition Tracker & Meal Logger with Telegram, Gemini AI and Google Sheets
Description: 🤖🥗 Telegram Nutrition AI Assistant (Alternative to Cal AI App) > AI-powered nutrition assistant for Telegram — log meals, set goals, and get personalized daily reports with Google Sheets integration. 📋 Description This n8n template creates a Telegram-based Nutrition AI Assistant 🥑🔥 designed as an open-source alternative to the Cal AI mobile app. It allows users to interact with an AI agent via text, voice, or images to track meals, calculate macros, and monitor nutrition goals directly from Telegram. The system integrates Google Sheets as the database, handling both user profiles and meal logs, while leveraging Gemini AI for natural conversation, food recognition, and daily progress reports. ✨ Key Features 💬 Multi-input support: Text, voice messages (transcribed), and food images (AI analysis). 📊 Macro calculation: Automatic estimation of calories, proteins, carbs, and fats. 📝 User-friendly registration: Simple onboarding without storing personal health data (no weight/height required). 🎯 Goal tracking: Users can set and update calorie and protein targets. 📈 Daily reports: Personalized progress messages with visual progress bars. 🗂 Google Sheets integration: Profile table for user targets. Meals table for food logs. 🔄 Advanced n8n nodes: Includes use of Merge, Subworkflow, and Code nodes for data processing and report generation. 💡 Acknowledgment Inspired by the Cal AI concept 💡 — this template demonstrates how to reproduce its main functionality with n8n, Telegram, and AI agents as a flexible, open-source automation workflow. 🏷 Tags telegram ai-assistant nutrition meal-tracking google-sheets food-logging voice-transcription image-analysis daily-reports n8n-template merge-node subworkflow-node code-node telegram-trigger google-gemini 💼 Use Case Use this template if you want to: 🥗 Log meals using text, images, or voice messages. 📊 Track nutrition goals (calories, proteins) with daily progress updates. 
🤖 Provide a chat-based nutrition assistant without building a full app. 🗂 Store structured nutrition data in Google Sheets for easy access and analysis. 💬 Example User Interactions 📸 User sends a photo of a meal → AI analyzes the food and logs calories/macros. 🎤 User sends a voice message → AI transcribes and logs the meal. ⌨️ User types “report” → AI returns a daily nutrition summary with progress bars. 🥅 User says “update my protein goal” → AI updates profile in Google Sheets. 🔑 Required Credentials Telegram Bot API (Bot Token) Google Sheets API credentials AI Provider API (Google Gemini or compatible LLM) ⚙️ Setup Instructions 🗂 Create two Google Sheets tables: Profile: User_ID, Name, Calories_target, Protein_target Meals: User_ID, Date, Meal_description, Calories, Proteins, Carbs, Fats 🔌 Configure the Telegram Trigger with your bot token. 🤖 Connect your AI provider credentials (Gemini recommended). 📑 Connect Google Sheets with your credentials. ▶️ Deploy the workflow in n8n. 🎯 Start interacting with your nutrition assistant via Telegram. 📌 Extra Notes 🟩 Green section: Handles Telegram trigger and user check. 🟥 Red section: Registers new users and sets goals. 🟦 Blue section: Processes text, voice, and images. 🟨 Yellow section: Generates nutrition reports. 🟪 Purple section: Main AI agent controlling tools and logic. 💡 Need Assistance? If you’d like help customizing or extending this workflow, feel free to reach out: 📧 Email: 🔗 LinkedIn: John Alejandro Silva Rodríguez | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.telegram",
"type": "Telegram",
"category": [
"Communication",
"HITL"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.toolWorkflow",
"type": "Call n8n Workflow Tool",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.googleGemini",
"type": "Google Gemini",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 8,
"node_types": [
"Google Sheets",
"Telegram",
"Code",
"AI Agent",
"Simple Memory",
"Call n8n Workflow Tool",
"Google Gemini Chat Model",
"Google Gemini"
]
} | |
Create an n8n workflow for: Local Chatbot with Retrieval Augmented Generation (RAG)
Description: Build a 100% local RAG with n8n, Ollama and Qdrant. This agent uses a semantic database (Qdrant) to answer questions about PDF files.

Tutorial
Click here to view the YouTube Tutorial

How it works
Build a chatbot that answers based on documents you provide it (Retrieval-Augmented Generation). You can upload as many PDF files as you want to the Qdrant database. The chatbot will use its retrieval tool to fetch the chunks and use them to answer questions.

Installation
Install n8n + Ollama + Qdrant using the Self-hosted AI Starter Kit. Make sure to install Llama 3.2 and mxbai-embed-large as the embedding model.

How to use it
1. First run the "Data Ingestion" part and upload as many PDF files as you want.
2. Run the chatbot and start asking questions about the documents you uploaded. | {
"nodes": [
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOllama",
"type": "Ollama Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
"type": "Recursive Character Text Splitter",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
"type": "Default Data Loader",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.vectorStoreQdrant",
"type": "Qdrant Vector Store",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.embeddingsOllama",
"type": "Embeddings Ollama",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 7,
"node_types": [
"AI Agent",
"Ollama Chat Model",
"Simple Memory",
"Recursive Character Text Splitter",
"Default Data Loader",
"Qdrant Vector Store",
"Embeddings Ollama"
]
} | |
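The retrieval step the local chatbot above relies on (Qdrant's semantic search over embedded chunks) reduces to nearest-neighbour scoring of vectors. A minimal JavaScript sketch of that scoring, using toy 3-dimensional vectors rather than real mxbai-embed-large embeddings; Qdrant performs the equivalent computation server-side:

```javascript
// Rank stored chunk embeddings by cosine similarity to the query embedding
// and keep the top-k. Toy vectors stand in for real embedding output.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(queryVec, points, k = 3) {
  return points
    .map((p) => ({ ...p, score: cosine(queryVec, p.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The retrieved top-k chunk texts are what the agent stuffs into the model's context to ground its answer.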
Create an n8n workflow for: Create & Upload AI-Generated ASMR YouTube Shorts with Seedance, Fal AI, and GPT-4
Description: ASMR AI Workflow. Who is this for? Content Creators, YouTube Automation Enthusiasts, and AI Hobbyists looking to autonomously generate and publish unique, satisfying ASMR-style YouTube Shorts without manual effort. What problem does this solve? This workflow solves the creative bottleneck and time-consuming nature of daily content creation. It fully automates the entire production pipeline, from brainstorming trendy ideas to publishing a finished video, turning your n8n instance into a 24/7 content factory. What this workflow does 1. Two-Stage AI Ideation & Planning: Uses an initial AI agent to brainstorm a short, viral ASMR concept based on current trends. A second "Planning" AI agent then takes this concept and expands it into a detailed, structured production plan, complete with a viral-optimized caption, hashtags, and descriptions for the environment and sound. 2. Multi-Modal Asset Generation: Video: Feeds detailed scene prompts to the ByteDance Seedance text-to-video model (via Wavespeed AI) to generate high-quality video clips. Audio: Simultaneously calls the Fal AI text-to-audio model to create custom, soothing ASMR sound effects that match the video's theme. Assembly: Automatically sequences the video clips and sound into a single, cohesive final video file using an FFMPEG API call. 3. Closed-Loop Publishing & Logging: Logging: Initially logs the new idea to a Google Sheet with a status of "In Progress". Publishing: Automatically uploads the final, assembled video directly to your YouTube channel, setting the title and description from the AI's plan. Updating: Finds the original row in the Google Sheet and updates its status to "Done", adding a direct link to the newly published YouTube video. Notifications: Sends real-time alerts to Telegram and/or Gmail with the video title and link, confirming the successful publication.
Setup Credentials: You will need to create credentials in your n8n instance for the following services: OpenAI API, Wavespeed AI API (for Seedance), Fal AI API, Google OAuth credential (enable YouTube Data API v3 and Google Sheets API in your Google Cloud Project), Telegram Bot credential (optional), Gmail OAuth credential. Configuration: This is an advanced workflow. The initial setup should take approximately 15-20 minutes. Google Sheet: Create a Google Sheet with these columns: idea, caption, production_status, youtube_url. Add the Sheet ID to the Google Sheets nodes in the workflow. Node Configuration: In the Telegram Notification node, enter your own Chat ID. In the Gmail Notification node, update the recipient email address. Activate: Once configured, save and set the workflow to "Active" to let it run on its schedule. How to customize Creative Direction: To change the style or theme of the videos (e.g., from kinetic sand to soap cutting), simply edit the systemMessage in the "2. Enrich Idea into Plan" and "Prompts AI Agent" nodes. Initial Ideas: To influence the AI's starting concepts, modify the prompt in the "1. Generate Trendy Idea" node. Video & Sound: To change the video duration or sound style, adjust the parameters in the "Create Clips" and "Create Sounds" nodes. Notifications: Add or remove notification channels (like Slack or Discord) after the "Upload to YouTube" node. | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.telegram",
"type": "Telegram",
"category": [
"Communication",
"HITL"
]
},
{
"name": "n8n-nodes-base.gmail",
"type": "Gmail",
"category": [
"Communication",
"HITL"
]
},
{
"name": "n8n-nodes-base.youTube",
"type": "YouTube",
"category": [
"Marketing"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.toolThink",
"type": "Think Tool",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 10,
"node_types": [
"Google Sheets",
"HTTP Request",
"Telegram",
"Gmail",
"YouTube",
"Code",
"AI Agent",
"OpenAI Chat Model",
"Structured Output Parser",
"Think Tool"
]
} | |
Create an n8n workflow for: Receipt Scanning & Analysis Workflow
Description: How it works: Automatically detects when a new receipt is uploaded to Google Drive. Extracts text from the receipt using OCR. Uses an AI Agent to analyze the extracted data and structure it (e.g., vendor, date, total, tax). Saves the organized receipt data into a Google Sheet for easy tracking. Set up steps: Setup takes around 15–20 minutes. You'll need a Google Drive folder for receipts and a Google Sheet to store results. Configure your Google Drive Trigger, OCR extraction, AI Agent, and Google Sheets connection. Detailed instructions and explanations are included in this n8n Starter Session tutorial series. | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
},
{
"name": "n8n-nodes-base.mistralAi",
"type": "Mistral AI",
"category": [
"Utility"
]
}
],
"node_count": 6,
"node_types": [
"Google Sheets",
"HTTP Request",
"AI Agent",
"OpenAI Chat Model",
"Structured Output Parser",
"Mistral AI"
]
} | |
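The structuring step in the receipt workflow above is handled by the AI Agent with a Structured Output Parser. For intuition only, here is a hypothetical regex-based fallback that covers very simple receipt layouts; the function name, field choices, and formats are illustrative, not taken from the template:

```javascript
// Pull vendor, date, and total out of raw OCR text. An AI agent handles
// messy real-world receipts far better; this shows the target shape of
// the structured output.
function extractReceiptFields(ocrText) {
  const dateMatch = ocrText.match(/\b(\d{4}-\d{2}-\d{2}|\d{2}\/\d{2}\/\d{4})\b/);
  const totalMatch = ocrText.match(/total[:\s]*\$?(\d+(?:\.\d{2})?)/i);
  return {
    vendor: ocrText.split("\n")[0].trim(), // assume the first OCR line is the vendor
    date: dateMatch ? dateMatch[1] : null,
    total: totalMatch ? parseFloat(totalMatch[1]) : null,
  };
}
```

Each returned object maps onto one row of the Google Sheet tracking results.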
Create an n8n workflow for: Track SEO Keyword Rankings with Bright Data MCP and GPT-4o AI Analysis
Description: This workflow contains community nodes that are only compatible with the self-hosted version of n8n. This workflow automatically monitors keyword rankings across search engines to track SEO performance and identify optimization opportunities. It saves you time by eliminating the need to manually check keyword positions and provides comprehensive ranking data for strategic SEO decision making. Overview This workflow automatically scrapes search engine results pages (SERPs) to track keyword rankings, competitor positions, and search features. It uses Bright Data to access search results without restrictions and AI to intelligently parse ranking data, track changes, and identify SEO opportunities. Tools Used n8n: The automation platform that orchestrates the workflow. Bright Data: For scraping search engine results without being blocked. OpenAI: AI agent for intelligent ranking analysis and SEO insights. Google Sheets: For storing keyword ranking data and tracking changes. How to Install Import the Workflow: Download the .json file and import it into your n8n instance. Configure Bright Data: Add your Bright Data credentials to the MCP Client node. Set Up OpenAI: Configure your OpenAI API credentials. Configure Google Sheets: Connect your Google Sheets account and set up your ranking tracking spreadsheet. Customize: Define target keywords and ranking monitoring parameters. Use Cases SEO Teams: Track keyword performance and identify ranking improvements. Content Marketing: Monitor content ranking success and optimization needs. Competitive Analysis: Track competitor keyword rankings and strategies. Digital Marketing: Measure organic search performance and ROI. | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserAutofixing",
"type": "Auto-fixing Output Parser",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 6,
"node_types": [
"Google Sheets",
"Code",
"AI Agent",
"OpenAI Chat Model",
"Auto-fixing Output Parser",
"Structured Output Parser"
]
} | |
Create an n8n workflow for: Generate AI Videos with Google Veo3, Save to Google Drive and Upload to YouTube
Description: This workflow allows users to generate AI videos using Google Veo3, save them to Google Drive, generate optimized YouTube titles with GPT-4o, and automatically upload them to YouTube with Upload-Post. The entire process is triggered from a Google Sheet that acts as the central interface for input and output. It automates video creation, uploading, and tracking, ensuring seamless integration between Google Sheets, Google Drive, Google Veo3, and YouTube. Benefits of this Workflow 💡 No Code Interface: Trigger and control the video production pipeline from a simple Google Sheet. ⚙️ Full Automation: Once set up, the entire video generation and publishing process runs hands-free. 🧠 AI-Powered Creativity: Generates engaging YouTube titles using GPT-4o. Leverages advanced generative video AI from Google Veo3. 📁 Cloud Storage & Backup: Stores all generated videos on Google Drive for safekeeping. 📈 YouTube Ready: Automatically uploads to YouTube with correct metadata, saving time and boosting visibility. 🧪 Scalable: Designed to process multiple video prompts by looping through new entries in Google Sheets. 🔒 API-First: Utilizes secure API-based communication for all services. How It Works Trigger: The workflow can be started manually ("When clicking 'Test workflow'") or scheduled ("Schedule Trigger") to run at regular intervals (e.g., every 5 minutes). Fetch Data: The "Get new video" node retrieves unfilled video requests from a Google Sheet (rows where the "VIDEO" column is empty). Video Creation: The "Set data" node formats the prompt and duration from the Google Sheet. The "Create Video" node sends a request to the Fal.run API (Google Veo3) to generate a video based on the prompt. Status Check: The "Wait 60 sec." node pauses execution for 60 seconds. The "Get status" node checks the video generation status. If the status is "COMPLETED," the workflow proceeds; otherwise, it waits again.
Video Processing: The "Get Url Video" node fetches the video URL. The "Generate title" node uses OpenAI (GPT-4.1) to create an SEO-optimized YouTube title. The "Get File Video" node downloads the video file. Upload & Update: The "Upload Video" node saves the video to Google Drive. The "HTTP Request" node uploads the video to YouTube via the Upload-Post API. The "Update Youtube URL" and "Update result" nodes update the Google Sheet with the video URL and YouTube link. Set Up Steps Google Sheet Setup: Create a Google Sheet with columns: PROMPT, DURATION, VIDEO, and YOUTUBE_URL. Share the Sheet link in the "Get new video" node. API Keys: Obtain a Fal.run API key (for Veo3) and set it in the "Create Video" node (header: Authorization: Key YOURAPIKEY). Get an Upload-Post API key (for YouTube uploads) and configure the "HTTP Request" node (header: Authorization: Apikey YOUR_API_KEY). YouTube Upload Configuration: Replace YOUR_USERNAME in the "HTTP Request" node with your Upload-Post profile name. Schedule Trigger: Configure the "Schedule Trigger" node to run periodically (e.g., every 5 minutes). Need help customizing? Contact me for consulting and support or add me on LinkedIn. | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.googleDrive",
"type": "Google Drive",
"category": [
"Data & Storage"
]
},
{
"name": "@n8n/n8n-nodes-langchain.openAi",
"type": "OpenAI",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 4,
"node_types": [
"Google Sheets",
"HTTP Request",
"Google Drive",
"OpenAI"
]
} | |
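The Veo3 workflow's status check ("Wait 60 sec." → "Get status" → proceed only on "COMPLETED") is a classic polling loop. A sketch with the status check and the delay injected as callbacks so the control flow is testable without real timers; in n8n the same loop is built from a Wait node plus an IF node on the status field:

```javascript
// Poll getStatus() until it reports "COMPLETED", sleeping between checks.
// Returns the number of checks made; throws if maxTries is exhausted.
function pollUntilComplete(getStatus, sleep, maxTries = 20) {
  for (let attempt = 1; attempt <= maxTries; attempt++) {
    if (getStatus() === "COMPLETED") return attempt;
    sleep(); // stands in for the 60-second Wait node
  }
  throw new Error("video generation did not complete within maxTries checks");
}
```

Capping the attempts matters: without a maxTries bound, a stuck generation job would loop the workflow forever.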
Create an n8n workflow for: 🎓 Learn n8n Expressions with an Interactive Step-by-Step Tutorial for Beginners
Description: How it works This template is an interactive, step-by-step tutorial designed to teach you the most important skill in n8n: using expressions to access and manipulate data. If you know what JSON is but aren't sure how to pull a specific piece of information from one node and use it in another, this workflow is for you. It starts with a single "Source Data" node that acts as our filing cabinet, and then walks you through a series of lessons, each demonstrating a new technique for retrieving and transforming that data. You will learn how to: Access a simple value from a previous node. Use n8n's built-in selectors like .last() and .first(). Get a specific item from a list (Array). Drill down into nested data (Objects). Combine these techniques to access data in an array of objects. Go beyond simple retrieval by using JavaScript functions to do math or change text. Inspect data with utility functions like Object.keys() and JSON.stringify(). Summarize data from multiple items using .all() and arrow functions. Set up steps Setup time: 0 minutes! This workflow is a self-contained tutorial and requires no setup or external credentials. Click "Execute Workflow" to run the entire tutorial. Follow the flow from the "Source Data" node to the "Final Exam" node. For each lesson, click on the node to see how its expressions are configured in the parameters panel. Read the detailed sticky note next to each lesson—it breaks down exactly how the expression works and why. By the end, you'll have the foundational knowledge to connect data and build powerful, dynamic workflows in n8n. | {
"nodes": [],
"node_count": 0,
"node_types": []
} | |
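Since n8n expressions are JavaScript inside {{ }}, the tutorial's lessons map directly onto plain-object access patterns. A rough JavaScript analogue, where the items array stands in for a previous node's output and all field names are invented for illustration:

```javascript
// Each n8n item wraps its data in a "json" property, mirrored here.
const items = [
  { json: { name: "Ada", tags: ["math", "code"], address: { city: "London" } } },
  { json: { name: "Grace", tags: ["navy"], address: { city: "Arlington" } } },
];

const firstName = items[0].json.name;                // simple value (like .first())
const lastName = items[items.length - 1].json.name;  // like .last()
const firstTag = items[0].json.tags[0];              // specific item from an array
const city = items[0].json.address.city;             // drill into nested objects
const keys = Object.keys(items[0].json);             // inspect available fields
const asText = JSON.stringify(items[1].json);        // serialize for debugging
const allCities = items.map((i) => i.json.address.city).join(", "); // like .all() + arrow fn
```

Inside a real expression the same lookups read e.g. {{ $('Source Data').first().json.name }}, with the node reference replacing direct access to items.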
Create an n8n workflow for: Transform Old Photos into Animated Videos with FLUX & Kling AI for Social Media
Description: This workflow contains community nodes that are only compatible with the self-hosted version of n8n. This automation template is designed for content creators, social media managers, and anyone looking to breathe new life into old family photos and historical images. It transforms any old black and white or sepia photograph into a colorized, animated video using cutting-edge AI technology, then automatically publishes the results across multiple social media platforms including Facebook, Instagram, YouTube, and X (Twitter). The workflow combines powerful AI services to create engaging content from vintage photographs: first enhancing and colorizing the image using FLUX Kontext, then bringing it to life with realistic animations using Kling Video AI, and finally distributing the results across your social media channels automatically. Note: The estimated cost per workflow execution is approximately $0.29 USD, covering the AI processing for both image colorization and video animation. The upload-post node only works for self-hosted n8n instances, but you can use the standard HTTP request node for uploading content on n8n Cloud. Who Is This For? Content Creators & Social Media Managers: Transform historical content into engaging videos that capture audience attention and drive engagement across platforms. Family History Enthusiasts: Bring old family photos to life by adding color and motion, creating emotional connections with your audience. Marketing Professionals: Leverage nostalgic content for brand storytelling, using vintage aesthetics to create compelling social media campaigns. Digital Artists & Photo Restorers: Streamline the process of enhancing and sharing restored vintage photographs with automated AI enhancement.
Social Media Influencers: Create unique, eye-catching content from historical images that stands out in crowded social feeds. What Problem Does This Workflow Solve? Creating engaging social media content from old photos typically requires multiple manual steps: photo restoration, colorization, animation, and then individual posting to each platform. This workflow addresses these challenges by: Automating Photo Enhancement: Uses advanced AI (FLUX Kontext) to automatically colorize and enhance old photographs, removing artifacts and improving quality. Creating Dynamic Content: Transforms static images into animated videos using Kling Video AI, making historical photos come alive with natural movements. Streamlining Multi-Platform Publishing: Automatically distributes the final animated videos across Facebook, Instagram, YouTube, and X with a single workflow execution. Saving Time & Effort: Eliminates the need for manual photo editing, video creation, and individual social media posting. How It Works Photo Upload: Users submit old photographs through a simple web form, with optional custom animation descriptions. Image Enhancement: The workflow uploads the photo to imgbb, then sends it to FLUX Kontext AI for colorization and quality enhancement. Animation Creation: The colorized image is processed by Kling Video AI to create a 5-second animated video with natural movements. Cloud Storage: The final video is automatically saved to Google Drive for backup and easy access. Multi-Platform Publishing: The animated video is simultaneously posted to Facebook, Instagram, YouTube, and X using the upload-post service. Setup FAL.AI API Key: Sign up at fal.ai and add your API key to the HTTP Request nodes for both FLUX Kontext and Kling Video AI services. ImgBB API Token: Create a free account at api.imgbb.com to get an API token for image hosting, then update the "Upload Image to imgbb" node.
Google Drive Connection: Connect your Google Drive account to enable automatic video storage and backup. Upload-Post Service: Create an account at upload-post.com to get your API credentials for multi-platform social media posting. Important: The upload-post node currently only works with self-hosted n8n instances. For n8n Cloud users, replace the upload-post node with standard HTTP request nodes to publish to each social media platform individually. Form Customization: (Optional) Modify the form fields in the "Photo Upload Form" node to collect additional information or customize the user experience. Requirements Accounts: n8n, FAL.AI, ImgBB, Google Drive, upload-post.com. API Keys & Credentials: FAL.AI API Key, ImgBB API Token, Google Drive OAuth2, Upload-post.com API Token & User ID. File Types: Supports JPG and PNG image formats for photo uploads. Cost: Approximately $0.29 USD per workflow execution for AI processing. Transform your old photographs into viral social media content with this powerful AI-driven workflow that handles everything from restoration to distribution automatically. | {
"nodes": [
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.googleDrive",
"type": "Google Drive",
"category": [
"Data & Storage"
]
}
],
"node_count": 2,
"node_types": [
"HTTP Request",
"Google Drive"
]
} | |
Create an n8n workflow for: Fetch Live ETF Metrics from JustETF to Excel with One-Click Updates
Description: Automate Your ETF Comparison: Real-Time Data & Analysis Automate ETF research in Excel with one click. This n8n workflow pulls live data from justetf.com using ISIN codes from your Excel table, extracts key metrics (dividends, fees, 5-year performance), and updates your “Div study” sheet instantly — all triggered by a button in Excel. Perfect for dividend investors, ETF screeners, or portfolio trackers who want fresh, accurate data without manual copy-paste. How it works Trigger: Click “Update Table” in Excel → calls n8n via webhook Excel: Logs current time (GMT-2) and reads all rows from “DivComp” table HTTP Request: Fetches ETF profile page from justetf.com using ISIN HTML Extraction: Parses page with CSS selectors to grab dividends, fees, 5Y performance Code Node: Cleans & structures data (e.g., last 5 years of dividends, yield, growth) Update Excel: Writes clean values back to your table (fees, yield, performance, name) Setup steps In Excel: Add a button → assign macro that calls your n8n webhook URL (path: /ETF) Ensure table “DivComp” has: ISIN, Dernière mise à jour, Frais, Performance depuis 5 ans, etc. In n8n: Connect Microsoft Excel (OneDrive) credential Update workbook/worksheet/table references if needed Test with 1–2 ISINs first Click “Update Table” → watch data refresh in real time! Tags: ETF, Excel, Web Scraping, Investing, Finance, Automation, justetf, Dividend Tracking | {
"nodes": [
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.microsoftExcel",
"type": "Microsoft Excel 365",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.html",
"type": "HTML",
"category": [
"Core Nodes"
]
}
],
"node_count": 4,
"node_types": [
"HTTP Request",
"Microsoft Excel 365",
"Code",
"HTML"
]
} | |
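The Code node in the ETF workflow above "cleans & structures" the scraped values before they are written back to Excel. A hypothetical helper for one part of that job, converting European-formatted strings such as "0,20% p.a." into plain floats; the function name and the exact input formats are assumptions, not taken from the template:

```javascript
// Convert European-formatted numeric text (decimal comma, optional thousands
// separators, trailing "%" or "p.a.") into a plain float. Assumes at most one
// decimal comma and treats dots as thousands separators.
function parseEuroNumber(text) {
  const cleaned = text
    .replace(/[^\d,.\-]/g, "") // drop "%", spaces, currency signs, letters of "p.a."
    .replace(/\./g, "")        // drop dots (thousands separators, leftovers of "p.a.")
    .replace(",", ".");        // decimal comma -> decimal point
  return parseFloat(cleaned);
}
```

Writing clean floats instead of display strings keeps the Excel table sortable and usable in formulas.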
Create an n8n workflow for: Transform Long Videos into Viral Shorts with AI and Schedule to Social Media using Whisper & Gemini
Description: This automation template turns any long video into multiple viral-ready short clips and auto-schedules them to TikTok, Instagram Reels, and YouTube Shorts. It works with both vertical and horizontal inputs and respects the original input resolution (no unnecessary upscaling), cropping or letterboxing intelligently when needed. The workflow automatically extracts between 3 and 6 clips (based on video length and the most engaging segments) and schedules one short per consecutive day (e.g., 3 clips → the next 3 days, 6 clips → the next 6 days). Note: This workflow uses OpenAI Whisper for word-level transcription, Google's Gemini for clip selection and metadata, and Upload-Post's FFmpeg API for GPU-accelerated cutting/cropping and social scheduling. You can use the same Upload-Post API token for both FFmpeg jobs and publishing uploads. Upload-Post also offers a generous free trial with no credit card required. Who Is This For? Creators & Editors: Batch-convert long talks/podcasts into daily Shorts/Reels/TikToks. Agencies & Social Teams: Turn webinars/interviews into a reliable short-form stream. Brands & Founders: Maintain a steady posting cadence with minimal hands-on editing. What Problem Does This Workflow Solve? Manual clipping is slow and inconsistent. This workflow: Finds Hooks Automatically: AI picks 3-6 high-retention segments from transcript + timestamps (count scales with video length/quality). Cuts Cleanly: Absolute-second FFmpeg timing to avoid mid-word cuts. Vertical & Horizontal Friendly: Handles both orientations and respects source resolution. Schedules for You: Posts one clip per day on consecutive days. How It Works Form Upload: Submit your long video. Audio Extraction: FFmpeg job extracts audio for accurate ASR. Whisper Transcription: Word-level timestamps enable precise clipping. AI Clip Mining (Gemini): Detects 3-6 "viral" moments (15-60s) and generates titles/descriptions.
Cut & Crop (FFmpeg): GPU pipeline produces clean clips; preserves input resolution/orientation when possible and crops/pads appropriately for target platforms. Status & Download: Polls job status and retrieves the final clips. Auto-Scheduling (Consecutive Days): Schedules one short per day starting tomorrow, for as many days as clips were produced (e.g., 3 clips → 3 days, 6 clips → 6 days) at a configurable time (default 20:00 Europe/Madrid). Setup OpenAI (Whisper): Add your OpenAI API credentials. Google Gemini: Add the Gemini credentials used by the AI Agent node. Upload-Post (free trial, no credit card required): Generate your API token, connect your social media accounts, and add your API token credentials in n8n (the same token works for FFmpeg jobs and publishing). Scheduling: Adjust posting time/intervals and timezone (Europe/Madrid by default). Metadata Mapping: Titles/descriptions are auto-generated per platform; tweak as needed. Requirements Accounts: n8n, OpenAI, Google (Gemini), Upload-Post, and social platform connections. API Keys: OpenAI token, Gemini credentials, Upload-Post token. Budget: Whisper + Gemini inference + FFmpeg compute + optional posting costs. Features Word-Accurate Cuts: Absolute-second timecodes with subtle pre/post-roll. Orientation-Aware: Supports vertical and horizontal inputs; preserves source resolution where possible. Platform-Optimized Output: 9:16-ready delivery with smart crop/pad behavior. Consecutive-Day Scheduler: 3-6 clips → 3-6 consecutive posting days, automatically. Retry & Polling: Built-in waits and status checks for robust processing. Modular: Swap models, adjust clip count/length, or add/remove platforms quickly. Turn long-form video into a consistent sequence of Shorts/Reels/TikToks: automatically, day after day, while respecting your source resolution. | {
"nodes": [
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 4,
"node_types": [
"HTTP Request",
"Code",
"AI Agent",
"Google Gemini Chat Model"
]
} | |
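The consecutive-day scheduling rule described above (N clips → one post per day for the next N days, at a fixed posting time) is simple date arithmetic. A sketch with timezone handling simplified to UTC date strings plus a separate time-of-day field; the template's default posting time is 20:00 Europe/Madrid:

```javascript
// Produce one scheduling slot per clip: clip 1 posts tomorrow, clip 2 the day
// after, and so on, each at the same local posting time.
function scheduleClips(clipCount, startDate, postTime = "20:00") {
  const slots = [];
  for (let i = 0; i < clipCount; i++) {
    const d = new Date(startDate.getTime());
    d.setUTCDate(d.getUTCDate() + i + 1); // +1 so the first slot is tomorrow
    slots.push({ clip: i + 1, date: d.toISOString().slice(0, 10), time: postTime });
  }
  return slots;
}
```

Using setUTCDate lets the Date object handle month and year rollovers, so a batch started on January 30 schedules cleanly into February.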
Create an n8n workflow for: 🤖 Create a Documentation Expert Bot with RAG, Gemini, and Supabase
Description: How it works This template is a complete, hands-on tutorial for building a RAG (Retrieval-Augmented Generation) pipeline. In simple terms, you'll teach an AI to become an expert on a specific topic—in this case, the official n8n documentation—and then build a chatbot to ask it questions. Think of it like this: instead of a general-knowledge AI, you're building an expert librarian. The workflow is split into two main parts: Part 1: Indexing the Knowledge (Building the Library) This is a one-time process you run manually. The workflow automatically scrapes all the pages of the n8n documentation, breaks them down into small, digestible chunks, and uses an AI model to create a special numerical representation (an "embedding") for each chunk. These embeddings are then stored in your own private knowledge base (a Supabase vector store). This is like a librarian reading every book and creating a hyper-detailed index card for every paragraph. Part 2: The AI Agent (The Expert Librarian) This is the chat interface. When you ask a question, the AI agent doesn't guess the answer. Instead, it uses your question to find the most relevant "index cards" (chunks) from the knowledge base it just built. It then feeds these specific, relevant chunks to a powerful language model (like Gemini) with a strict instruction: "Answer the user's question using ONLY this information." This ensures the answers are accurate, factual, and grounded in your provided documents. Set up steps Setup time: ~15-20 minutes This is an advanced workflow that requires setting up a free external database. Follow these steps carefully. Set up Supabase (Your Knowledge Base): You need a free Supabase account. Follow the detailed instructions in the large Workflow Setup sticky notes in the top-right of the workflow to: Create a new Supabase project. Run the provided SQL query in the SQL Editor to prepare your database. Get your Project URL and Service Role Key. 
Configure n8n Credentials: In your n8n instance, create a new Supabase credential using the Project URL and Service Role Key from the previous step. Create a new Google AI credential with your Gemini API key. Configure the Workflow Nodes: Select your new Supabase credential in the three Supabase nodes: Your Supabase Vector Store, Official n8n Documentation and Keep Supabase Instance Alive. Select your new Google AI credential in the three Gemini nodes: Gemini Chunk Embedding, Gemini Query Embedding and Gemini 2.5 Flash. Build the Knowledge Base: Find the Start Indexing manual trigger node at the top-left. Click its "Execute workflow" button to start the indexing process. This will take several minutes as it scrapes and processes the entire n8n documentation. You only need to do this once. Chat with Your Expert Agent: Once the indexing is complete, Activate the entire workflow. Open the RAG Chatbot chat trigger node and copy its Public URL. Open the URL in a new tab and start asking questions about n8n! For example: "How does the IF node work?" or "What is a sub-workflow?". | {
"nodes": [
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.supabase",
"type": "Supabase",
"category": [
"Data & Storage"
]
},
{
"name": "n8n-nodes-base.html",
"type": "HTML",
"category": [
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
"type": "Recursive Character Text Splitter",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.vectorStoreSupabase",
"type": "Supabase Vector Store",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
"type": "Default Data Loader",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.embeddingsGoogleGemini",
"type": "Embeddings Google Gemini",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 10,
"node_types": [
"HTTP Request",
"Supabase",
"HTML",
"AI Agent",
"Simple Memory",
"Recursive Character Text Splitter",
"Supabase Vector Store",
"Default Data Loader",
"Embeddings Google Gemini",
"Google Gemini Chat Model"
]
} | |
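The Recursive Character Text Splitter used in the indexing phase above tries coarse separators first (paragraphs, then sentences, then words) so chunks stay under a size limit without cutting mid-word. A simplified sketch of that strategy; real splitters also add overlap between adjacent chunks, which is omitted here for clarity:

```javascript
// Split text on the coarsest separator first, merging small pieces back up to
// chunkSize, and only fall back to finer separators (or a hard character
// split) when a piece is still too long.
function splitText(text, chunkSize, separators = ["\n\n", ". ", " "]) {
  if (text.length <= chunkSize) return [text];
  const [sep, ...rest] = separators;
  if (sep === undefined) {
    // no separators left: hard-split every chunkSize characters
    const out = [];
    for (let i = 0; i < text.length; i += chunkSize) out.push(text.slice(i, i + chunkSize));
    return out;
  }
  const chunks = [];
  let current = "";
  for (const part of text.split(sep)) {
    const candidate = current ? current + sep + part : part;
    if (candidate.length <= chunkSize) {
      current = candidate; // keep merging small pieces
    } else {
      if (current) chunks.push(current);
      if (part.length <= chunkSize) {
        current = part; // start a fresh chunk
      } else {
        chunks.push(...splitText(part, chunkSize, rest)); // recurse on finer separators
        current = "";
      }
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Each resulting chunk is what gets embedded and stored as one Supabase vector-store row, so chunkSize directly trades retrieval precision against context per hit.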
Create an n8n workflow for: AI-Powered WhatsApp Chatbot for Text, Voice, Images, and PDF with RAG
Description: Who is this for? This template is designed for internal support teams, product specialists, and knowledge managers in technology companies who want to automate ingestion of product documentation and enable AI-driven, retrieval-augmented question answering via WhatsApp. What problem is this workflow solving? Support agents often spend too much time manually searching through lengthy documentation, leading to inconsistent or delayed answers. This solution automates importing, chunking, and indexing product manuals, then uses retrieval-augmented generation (RAG) to answer user queries accurately and quickly with AI via WhatsApp messaging. What these workflows do Workflow 1: Document Ingestion & Indexing Manually triggered to import product documentation from Google Docs. Automatically splits large documents into chunks for efficient searching. Generates vector embeddings for each chunk using OpenAI embeddings. Inserts the embedded chunks and metadata into a MongoDB Atlas vector store, enabling fast semantic search. Workflow 2: AI-Powered Query & Response via WhatsApp Listens for incoming WhatsApp user messages, supporting various types: Text messages: Plain text queries from users. Audio messages: Voice notes transcribed into text for processing. Image messages: Photos or screenshots analyzed to provide contextual answers. Document messages: PDFs, spreadsheets, or other files parsed for relevant content. Converts incoming queries to vector embeddings and performs similarity search on the MongoDB vector store. Uses OpenAI’s GPT-4o-mini model with retrieval-augmented generation to produce concise, context-aware answers. Maintains conversation context across multiple turns using a memory buffer node. Routes different message types to appropriate processing nodes to maximize answer quality. Setup Setting up vector embeddings Authenticate Google Docs and connect your Google Docs URL containing the product documentation you want to index. 
Authenticate MongoDB Atlas and connect the collection where you want to store the vector embeddings. Create a search index on this collection to support vector similarity queries. Ensure the index name matches the one configured in n8n (data_index). See the example MongoDB search index template below for reference. Setting up chat Authenticate the WhatsApp node with your Meta account credentials to enable message receiving and sending. Connect the MongoDB collection containing embedded product documentation to the MongoDB Vector Search node used for similarity queries. Set up the system prompt in the Knowledge Base Agent node to reflect your company’s tone, answering style, and any business rules, ensuring it references the connected MongoDB collection for context retrieval. Make sure Both MongoDB nodes (in ingestion and chat workflows) are connected to the same collection with: An embedding field storing vector data, Relevant metadata fields (e.g., document ID, source), and The same vector index name configured (e.g., data_index). Search Index Example: { "mappings": { "dynamic": false, "fields": { "_id": { "type": "string" }, "text": { "type": "string" }, "embedding": { "type": "knnVector", "dimensions": 1536, "similarity": "cosine" }, "source": { "type": "string" }, "doc_id": { "type": "string" } } } } | {
"nodes": [
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.googleDocs",
"type": "Google Docs",
"category": [
"Miscellaneous"
]
},
{
"name": "n8n-nodes-base.whatsApp",
"type": "WhatsApp Business Cloud",
"category": [
"Communication",
"HITL"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
"type": "Embeddings OpenAI",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
"type": "Recursive Character Text Splitter",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
"type": "Default Data Loader",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.openAi",
"type": "OpenAI",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.vectorStoreMongoDBAtlas",
"type": "MongoDB Atlas Vector Store",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 12,
"node_types": [
"HTTP Request",
"Google Docs",
"WhatsApp Business Cloud",
"Code",
"AI Agent",
"Embeddings OpenAI",
"OpenAI Chat Model",
"Simple Memory",
"Recursive Character Text Splitter",
"Default Data Loader",
"OpenAI",
"MongoDB Atlas Vector Store"
]
} | |
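The ingestion steps above (split documents into chunks, embed each chunk, query MongoDB Atlas by vector similarity) can be sketched outside n8n. This is a simplified illustration, not the template's actual nodes: the fixed-size overlapping chunker below stands in for the Recursive Character Text Splitter, and the aggregation pipeline mirrors Atlas's `$vectorSearch` stage using the `data_index` name and `embedding` field from the example index; check the exact document shape your own nodes write.

```python
def chunk_text(text: str, chunk_size: int = 400, overlap: int = 50) -> list[str]:
    """Split a document into overlapping chunks before embedding.
    Simplified stand-in for n8n's Recursive Character Text Splitter."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]


def vector_search_pipeline(query_vector: list[float], limit: int = 4) -> list[dict]:
    """Build the Atlas aggregation pipeline for similarity search.
    Index name and field path follow the template's search index example."""
    return [{
        "$vectorSearch": {
            "index": "data_index",        # must match the index name configured in n8n
            "path": "embedding",          # field holding the 1536-dim vector
            "queryVector": query_vector,  # embedding of the user's question
            "numCandidates": 10 * limit,  # coarse candidates before exact scoring
            "limit": limit,
        }
    }]
```

In the workflow itself, both MongoDB nodes point at the same collection, so the pipeline built here corresponds to what the MongoDB Vector Search node runs against the chunks inserted during ingestion.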
Create an n8n workflow for: Automate AI Video Creation & Multi-Platform Publishing with GPT-4, Veo 3.1 & Blotato
Description: 💥 Automate AI Video Creation & Multi-Platform Publishing with Veo 3.1 & Blotato 🎯 Who is this for? This workflow is designed for content creators, marketers, and automation enthusiasts who want to produce professional AI-generated videos and publish them automatically on social media — without editing or manual uploads. Perfect for those using Veo 3.1, GPT-4, and Blotato to scale video creation. 💡 What problem is this workflow solving? Creating short-form content (TikTok, Instagram Reels, YouTube Shorts) is time-consuming — from writing scripts to video editing and posting. This workflow eliminates the manual steps by combining AI storytelling + video generation + automated publishing, letting you focus on creativity while your system handles production and distribution. ⚙️ What this workflow does Reads new ideas from Google Sheets Generates story scripts using GPT-4 Creates cinematic videos using Veo 3.1 (fal.ai/veo3.1/reference-to-video) with 3 input reference images Uploads the final video automatically to Google Drive Publishes the video across multiple platforms (TikTok, Instagram, Facebook, X, LinkedIn, YouTube) via Blotato Updates Google Sheets with video URL and status (Completed / Failed) 🧩 Setup Required accounts: OpenAI → GPT-4 API key fal.ai → Veo 3.1 API key Google Cloud Console → Sheets & Drive connection Blotato → API key for social media publishing Configuration steps: Copy the Google Sheets structure: A: id_video B: niche C: idea D: url_1 E: url_2 F: url_3 G: url_final H: status Add your API keys to the Workflow Configuration node. Insert three image URLs and a short idea into your sheet. Wait for the automation to process and generate your video. 
🧠 How to customize this workflow Change duration or aspect ratio → Edit the Veo 3.1 node JSON body (duration, aspect_ratio) Modify prompt style → Adjust the “Optimize Prompt for Veo” node for your desired tone or cinematic look Add more platforms → Extend Blotato integration to publish on Pinterest, Reddit, or Threads Enable Telegram Trigger → Allow users to submit ideas and images directly via Telegram 🚀 Expected Outcome Within 2–3 minutes, your idea is transformed into a full cinematic AI video — complete with storytelling, visuals, and automatic posting to your social media channels. Save hours of editing and focus on strategy, creativity, and growth. 👋 Need help or want to customize this? 📩 Contact: LinkedIn 📺 YouTube: @DRFIRASS 🚀 Workshops: Mes Ateliers n8n 📄 Documentation: Notion Guide | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.googleDrive",
"type": "Google Drive",
"category": [
"Data & Storage"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.openAi",
"type": "OpenAI",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 5,
"node_types": [
"Google Sheets",
"HTTP Request",
"Google Drive",
"Code",
"OpenAI"
]
} | |
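The customization note about editing the Veo 3.1 node's JSON body (`duration`, `aspect_ratio`) can be illustrated with a small payload builder. Field names here follow the description's wording and are assumptions; verify them against the current fal.ai reference-to-video API before use.

```python
def build_veo_request(prompt: str, image_urls: list[str],
                      duration: str = "8s",
                      aspect_ratio: str = "9:16") -> dict:
    """Assemble the JSON body for the HTTP Request node calling Veo 3.1.
    The sheet supplies exactly three reference images (url_1..url_3)."""
    if len(image_urls) != 3:
        raise ValueError("Veo 3.1 reference-to-video expects 3 image URLs")
    return {
        "prompt": prompt,
        "image_urls": image_urls,
        "duration": duration,          # edit here to change clip length
        "aspect_ratio": aspect_ratio,  # e.g. "9:16" for Reels/Shorts
    }
```

Changing the two keyword arguments is the programmatic equivalent of editing the node's JSON body by hand.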
Create an n8n workflow for: Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram
Description: Generate AI videos with Seedance & Blotato, upload to TikTok, YouTube & Instagram Who is this for? This template is ideal for creators, content marketers, social media managers, and AI enthusiasts who want to automate the production of short-form, visually captivating videos for platforms like TikTok, YouTube Shorts, and Instagram Reels — all without manual editing or publishing. What problem is this workflow solving? Creating engaging videos requires: Generating creative ideas Writing detailed scene prompts Producing realistic video clips and sound effects Editing and stitching the final video Publishing across multiple platforms This workflow automates the entire process, saving hours of manual work and ensuring consistent, AI-driven content output ready for social distribution. What this workflow does This end-to-end AI video automation workflow: Generates a creative idea using OpenAI and LangChain Creates detailed video prompts with Seedance AI Generates video clips via Wavespeed AI Generates sound effects with Fal AI Stitches the final video using Fal AI’s ffmpeg API Logs metadata and video links to Google Sheets Uploads the video to Blotato Auto-publishes to TikTok, YouTube, Instagram, and other platforms Setup Add your OpenAI API key in the LLM nodes Set up Seedance and Wavespeed AI credentials for video prompt and clip generation Add your Fal AI API key for sound and stitching steps Connect your Google Sheets account for tracking ideas and outputs Set your Blotato API key and fill in the platform account IDs in the Assign Social Media IDs node Adjust the Schedule Trigger to control when the automation runs How to customize this workflow to your needs Change the AI prompts to target your niche (e.g., ASMR, product videos, humor) Add a Telegram or Slack step for video preview before publishing Tweak scene structure or video duration to match your style Disable platforms you don’t want by turning off specific HTTP Request nodes Edit the 
sound generation prompts for different moods or effects 📄 Documentation: Notion Guide Need help customizing? Contact me for consulting and support : Linkedin / Youtube | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.toolThink",
"type": "Think Tool",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 7,
"node_types": [
"Google Sheets",
"HTTP Request",
"Code",
"AI Agent",
"OpenAI Chat Model",
"Structured Output Parser",
"Think Tool"
]
} | |
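The "disable platforms you don't want by turning off specific HTTP Request nodes" step amounts to filtering a platform-to-account-ID map before posting. A hedged sketch of that fan-out logic: the payload keys (`platform`, `accountId`, `mediaUrl`, `text`) are illustrative assumptions, not Blotato's documented schema.

```python
def build_posts(video_url: str, caption: str,
                account_ids: dict[str, str],
                enabled: set[str]) -> list[dict]:
    """One post payload per enabled platform, mirroring the
    'Assign Social Media IDs' node followed by per-platform HTTP requests.
    Platforms missing from `enabled` are simply skipped, like a disabled node."""
    return [
        {"platform": p, "accountId": acc, "mediaUrl": video_url, "text": caption}
        for p, acc in account_ids.items()
        if p in enabled
    ]
```

Each resulting dict would become the body of one HTTP Request node call to the publishing API.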
Create an n8n workflow for: 🤖 Build a Documentation Expert Chatbot with Gemini RAG Pipeline
Description: How it works This template is a complete, hands-on tutorial for building a RAG (Retrieval-Augmented Generation) pipeline. In simple terms, you'll teach an AI to become an expert on a specific topic—in this case, the official n8n documentation—and then build a chatbot to ask it questions. Think of it like this: instead of a general-knowledge AI, you're building an expert librarian. The workflow is split into two main parts: Part 1: Indexing the Knowledge (Building the Library) This is a one-time process you run manually. The workflow automatically scrapes all pages of the n8n documentation, breaks them down into small, digestible chunks, and uses an AI model to create a special numerical representation (an "embedding") for each chunk. These embeddings are then stored in n8n's built-in Simple Vector Store. This is like a librarian reading every book and creating a hyper-detailed index card for every paragraph. Important: This in-memory knowledge base is temporary. It will be erased if you restart your n8n instance, and you will need to run the indexing process again. Part 2: The AI Agent (The Expert Librarian) This is the chat interface. When you ask a question, the AI agent doesn't guess the answer. Instead, it uses your question to find the most relevant "index cards" (chunks) from the knowledge base it just built. It then feeds these specific, relevant chunks to a powerful language model (Gemini) with a strict instruction: "Answer the user's question using ONLY this information." This ensures the answers are accurate, factual, and grounded in your provided documents. Set up steps Setup time: 2 minutes (plus 15-20 minutes for indexing) This template uses n8n's built-in tools, removing the need for an external database. Follow these simple steps to get started. Configure Google AI Credentials: You will need a Google AI API key for the Gemini models. In your n8n workflow, go to any of the three Gemini nodes (e.g., Gemini 2.5 Flash). 
Click the Credential dropdown and select + Create New Credential. Enter your Gemini API key and save. Apply Credentials to All Nodes: Your new Google AI credential is now saved. Go to the other two Gemini nodes (Gemini Chunk Embedding and Gemini Query Embedding) and select your newly created credential from the dropdown list. Build the Knowledge Base: Find the Start Indexing manual trigger node at the top-left of the workflow. Click its "Execute workflow" button to start the indexing process. ⚠️ Be Patient: This will take 15-20 minutes as it scrapes and processes the entire n8n documentation. You only need to do this once per n8n session. If you restart n8n, you must run this step again. Chat with Your Expert Agent: Once the indexing is complete, Activate the entire workflow using the toggle at the top of the screen. Open the RAG Chatbot chat trigger node (bottom-left) and copy its Public URL. Open the URL in a new tab and start asking questions about n8n! For example: "How does the IF node work?" or "What is a sub-workflow?". | {
"nodes": [
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.html",
"type": "HTML",
"category": [
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
"type": "Recursive Character Text Splitter",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.vectorStoreInMemory",
"type": "Simple Vector Store",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
"type": "Default Data Loader",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.embeddingsGoogleGemini",
"type": "Embeddings Google Gemini",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 9,
"node_types": [
"HTTP Request",
"HTML",
"AI Agent",
"Simple Memory",
"Recursive Character Text Splitter",
"Simple Vector Store",
"Default Data Loader",
"Embeddings Google Gemini",
"Google Gemini Chat Model"
]
} | |
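Part 1 and Part 2 of the tutorial reduce to: store (chunk, embedding) pairs in memory, then answer questions by retrieving the nearest chunks. A self-contained sketch of the retrieval step, with toy 2-dimensional vectors in place of real Gemini embeddings:

```python
import math


class InMemoryVectorStore:
    """Toy version of n8n's Simple Vector Store: data lives in a plain list,
    so it is lost on restart — which is why the tutorial's indexing step
    must be re-run each session."""

    def __init__(self) -> None:
        self._items: list[tuple[str, list[float]]] = []

    def add(self, chunk: str, embedding: list[float]) -> None:
        self._items.append((chunk, embedding))

    @staticmethod
    def _cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def top_k(self, query: list[float], k: int = 2) -> list[str]:
        """Return the k chunks most similar to the query embedding —
        the 'index cards' handed to the language model."""
        scored = sorted(self._items,
                        key=lambda item: self._cosine(query, item[1]),
                        reverse=True)
        return [chunk for chunk, _ in scored[:k]]
```

The agent then feeds `top_k(...)` results to Gemini with the "answer using ONLY this information" instruction described above.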
Create an n8n workflow for: Course Recommendation System for Surveys with Data Tables and GPT-4.1-Mini
Description: Use the n8n Data Tables feature to store, retrieve, and analyze survey results — then let OpenAI automatically recommend the most relevant course for each respondent. 🧠 What this workflow does This workflow demonstrates how to use n8n’s built-in Data Tables to create an internal recommendation system powered by AI. It: Collects survey responses through a Form Trigger Saves responses to a Data Table called Survey Responses Fetches a list of available courses from another Data Table called Courses Passes both Data Tables into an OpenAI Chat Agent, which selects the most relevant course Returns a structured recommendation with: course: the course title reasoning: why it was selected > Trigger: Form submission (manual or public link) 👥 Who it’s for Perfect for educators, training managers, or anyone wanting to use n8n Data Tables as a lightweight internal database — ideal for AI-driven recommendations, onboarding workflows, or content personalization. ⚙️ How to set it up 1️⃣ Create your n8n Data Tables This workflow uses two Data Tables — both created directly inside n8n. 🧾 Table 1: Survey Responses Columns: Name Q1 — Where did you learn about n8n? Q2 — What is your experience with n8n? Q3 — What kind of automations do you need help with? To create: Add a Data Table node to your workflow. From the list, click “Create New Data Table.” Name it Survey Responses and add the columns above. 📚 Table 2: Courses Columns: Course Description To create: Add another Data Table node. Click “Create New Data Table.” Name it Courses and create the columns above. Copy course data from this Google Sheet: 👉 This Courses Data Table is where you’ll store all available learning paths or programs for the AI to compare against survey inputs. 
2️⃣ Connect OpenAI Go to OpenAI Platform Create an API key In n8n, open Credentials → OpenAI API and paste your key The workflow uses the gpt-4.1-mini model via the LangChain integration 🧩 Key Nodes Used | Node | Purpose | n8n Feature | |------|----------|-------------| | Form Trigger | Collect survey responses | Forms | | Data Table (Upsert) | Stores results in Survey Responses | Data Tables | | Data Table (Get) | Retrieves Courses | Data Tables | | Aggregate + Set | Combines and formats table data | Core nodes | | OpenAI Chat Model (LangChain Agent) | Analyzes responses and courses | AI | | Structured Output Parser | Returns structured JSON output | LangChain | 💡 Tips for customization Add more Data Table columns (e.g., email, department, experience years) Use another Data Table to store AI recommendations or performance results Modify the Agent system message to customize how AI chooses courses Send recommendations via Email, Slack, or Google Sheets 🧾 Why Data Tables? This workflow shows how n8n’s Data Tables can act as your internal database: Create and manage tables directly inside n8n No external integrations needed Store structured data for AI prompts Share tables across multiple workflows All user data and course content are stored securely and natively in n8n Cloud or Self-Hosted environments. 📬 Contact Need help customizing this (e.g., expanding Data Tables, connecting multiple surveys, or automating follow-ups)? 📧 🔗 Robert Breen 🌐 ynteractive.com | {
"nodes": [
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 3,
"node_types": [
"AI Agent",
"OpenAI Chat Model",
"Structured Output Parser"
]
} | |
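The Structured Output Parser node enforces that the agent's reply contains exactly the `course` and `reasoning` fields described above. A minimal sketch of that contract check (the parser node itself does more, e.g. retry prompting; this only validates):

```python
import json

REQUIRED_KEYS = {"course", "reasoning"}


def parse_recommendation(raw: str) -> dict:
    """Parse the agent's JSON reply and keep only the required fields,
    raising if either `course` or `reasoning` is missing."""
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return {key: data[key] for key in REQUIRED_KEYS}
```

A validated dict like this is what downstream nodes (email, Slack, another Data Table) can rely on.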
Create an n8n workflow for: 🎓 Learn JSON Basics with an Interactive Step-by-Step Tutorial for Beginners
Description: How it works This workflow is an interactive, hands-on tutorial designed to teach you the absolute basics of JSON (JavaScript Object Notation) and, more importantly, how to use it within n8n. It's perfect for beginners who are new to automation and data structures. The tutorial is structured as a series of simple steps. Each node introduces a new, fundamental concept of JSON: Key/Value Pairs: The basic building block of all JSON. Data Types: It then walks you through the most common data types one by one: String (text) Number (integers and decimals) Boolean (true or false) Null (representing "nothing") Array (an ordered list of items) Object (a collection of key/value pairs) Using JSON with Expressions: The most important step! It shows you how to dynamically pull data from a previous node into a new one using n8n's expressions ({{ }}). Final Exam: A final node puts everything together, building a complete JSON object by referencing data from all the previous steps. Each node has a detailed sticky note explaining the concept in simple terms. Set up steps Setup time: 0 minutes! This is a tutorial workflow, so there is no setup required. Simply click the "Execute Workflow" button to run it. Follow the instructions in the main sticky note: click on each node in order, from top to bottom. For each node, observe the output in the right-hand panel and read the sticky note next to it to understand what you're seeing. By the end, you'll have a solid understanding of what JSON is and how to work with it in your own n8n workflows. | {
"nodes": [],
"node_count": 0,
"node_types": []
} | |
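The concepts this tutorial steps through (key/value pairs plus the six value types: string, number, boolean, null, array, object) all fit in one small document. A sketch parsed with Python's `json` module; the field names are illustrative only:

```python
import json

# One object exercising every JSON value type covered by the tutorial.
raw = '''{
  "name": "n8n tutorial",
  "steps": 7,
  "finished": false,
  "nickname": null,
  "topics": ["string", "number", "boolean"],
  "author": {"platform": "n8n"}
}'''

data = json.loads(raw)  # JSON text -> Python dict
# The nested access below is what an n8n expression like
# {{ $json.author.platform }} performs on a node's output.
platform = data["author"]["platform"]
```

Note that JSON's `false` and `null` become Python's `False` and `None` after parsing.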
Create an n8n workflow for: Automate & Publish Video Ad Campaigns with NanoBanana, Seedream, GPT-4o, Veo 3
Description: 💥 Automate video ads with NanoBanana, Seedream 4, ChatGPT Image and Veo 3 Who is this for? This template is designed for marketers, content creators, and e-commerce brands who want to automate the creation of professional ad videos at scale. It’s ideal for teams looking to generate consistent, high-quality video ads for social media without spending hours on manual design, editing, or publishing. What problem is this workflow solving? / Use case Creating video ads usually requires multiple tools and a lot of time: writing scripts, designing product visuals, editing videos, and publishing them across platforms. This workflow automates the entire pipeline — from idea to ready-to-publish ad video — ensuring brands can quickly test campaigns and boost engagement without production delays. What this workflow does Generates ad ideas from Telegram input (text + product image). Creates product visuals using multiple AI image engines: 🌊 Seedream 4.0 (realistic visuals) 🍌 NanoBanana (image editing & enhancement) 🤖 ChatGPT Image / GPT-4o (creative variations) Produces cinematic video ads with Veo 3 based on AI-generated scripts. Merges multiple short clips into a polished final ad. Publishes automatically to multiple platforms (TikTok, Instagram, LinkedIn, X, Threads, Facebook, Pinterest, Bluesky, YouTube) via Blotato. Stores metadata and results in Google Sheets & Google Drive for easy tracking. Notifies you via Telegram with the video link and copy. Setup Connect your accounts in n8n: Telegram API (for input and notifications) Google Drive + Google Sheets (storage & tracking) Kie AI API (Seedream + Veo 3) Fal.ai API (NanoBanana + video merging) OpenAI (for script and prompt generation) Blotato API (for social publishing) Prepare a Google Sheet with brand info and settings (product name, category, features, offer, website URL). Deploy the workflow and connect your Telegram bot to start sending ad ideas (photo + caption). 
Run the workflow — it will automatically generate images, create videos, and publish to your chosen channels. How to customize this workflow to your needs Brand customization: Adjust the Google Sheet values to reflect your brand’s offers and product features. Platforms: Enable/disable specific Blotato nodes depending on which platforms you want to publish to. Video style: Edit the AI agent’s system prompt to control tone, format, and transitions (cinematic, playful, modern, etc.). Notifications: Adapt Telegram nodes to send updates to different team members or channels. Storage: Change the Google Drive folder IDs to store generated videos and images in your preferred location. This workflow lets you go from idea → images → cinematic ad video → auto-published content in minutes, fully automated. 📄 🎥 Watch This Tutorial: Step by Step 📄 Documentation: Notion Guide Need help customizing? Contact me for consulting and support : Linkedin / Youtube | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.telegram",
"type": "Telegram",
"category": [
"Communication",
"HITL"
]
},
{
"name": "n8n-nodes-base.googleDrive",
"type": "Google Drive",
"category": [
"Data & Storage"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.openAi",
"type": "OpenAI",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.toolThink",
"type": "Think Tool",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 10,
"node_types": [
"Google Sheets",
"HTTP Request",
"Telegram",
"Google Drive",
"Code",
"AI Agent",
"OpenAI Chat Model",
"Structured Output Parser",
"OpenAI",
"Think Tool"
]
} | |
Create an n8n workflow for: Email Support Agent w/ Gemini & GPT fallback using Gmail + Google Sheets
Description: 📧 Master Your First AI Email Agent with Smart Fallback! Welcome to your hands-on guide for building a resilient, intelligent email support system in n8n! This workflow is specifically designed as an educational tool to help you understand advanced AI automation concepts in a practical, easy-to-follow way. 🚀 What You'll Learn & Build: This powerful template enables you to create an automated email support agent that: Monitors Gmail for new customer inquiries in real-time. Processes requests using a primary AI model (Google Gemini) for efficiency. Intelligently falls back to a secondary AI model (OpenAI GPT) if the primary model fails or for more complex queries, ensuring robust reliability. Generates personalized and helpful replies automatically. Logs every interaction meticulously to a Google Sheet for easy tracking and analysis. 💡 Why a Fallback Model is Game-Changing (and Why You Should Learn It): Unmatched Reliability (99.9% Uptime): If one AI service experiences an outage or rate limits, your automation seamlessly switches to another, ensuring no customer email goes unanswered. Cost Optimization: Leverage more affordable models (like Gemini) for standard queries, reserving premium models (like GPT) only when truly needed, significantly reducing your API costs. Superior Quality Assurance: Get the best of both worlds – the speed of cost-effective models combined with the accuracy of more powerful ones for complex scenarios. Real-World Application: This isn't just theory; it's a critical pattern for building resilient, production-ready AI systems. 🎓 Perfect for Beginners & Aspiring Automators: Simple Setup: With drag-and-drop design and pre-built integrations, you can get this workflow running with minimal configuration. Just add your API keys! Clear Educational Value: Learn core concepts like AI model orchestration strategies, customer service automation best practices, and multi-model AI implementation patterns. 
Immediate Results: See your AI agent in action, responding to emails and logging data within minutes of setup. 🛠️ Getting Started Checklist: To use this workflow, you'll need: A Gmail account with API access enabled. A Google Sheets document created for logging. A Gemini API key (your primary AI model). An OpenAI API key (your fallback AI model). An n8n instance (cloud or desktop). Embark on your journey to building intelligent, resilient automation systems today! | {
"nodes": [
{
"name": "n8n-nodes-base.gmail",
"type": "Gmail",
"category": [
"Communication",
"HITL"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 5,
"node_types": [
"Gmail",
"AI Agent",
"OpenAI Chat Model",
"Simple Memory",
"Google Gemini Chat Model"
]
} | |
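The primary/fallback orchestration this template teaches is, at its core, a try/except around two model calls. A sketch with stub callables standing in for the Gemini and GPT nodes (real n8n nodes handle credentials and retries for you):

```python
from typing import Callable


def answer_with_fallback(prompt: str,
                         primary: Callable[[str], str],
                         fallback: Callable[[str], str]) -> tuple[str, str]:
    """Try the cheaper primary model first (Gemini in this template);
    on any error, route the same prompt to the fallback model (GPT).
    Returns (reply, model_used) so the Google Sheets log can record
    which model actually answered."""
    try:
        return primary(prompt), "primary"
    except Exception:
        return fallback(prompt), "fallback"
```

Logging the second tuple element per email is what lets you measure how often the fallback actually fires, and therefore what the pattern is saving you.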
Create an n8n workflow for: Automated LinkedIn Content Creation with GPT-4 and DALL-E for Scheduled Posts
Description: How it works Automatically generates trending LinkedIn content topics using AI Researches current industry angles and hooks Writes posts in your authentic voice using OpenAI Creates professional images with DALL-E Posts everything on schedule without manual intervention Set up steps Connect OpenAI API for content generation and image creation Link LinkedIn API for automated posting Configure scheduling triggers (daily/weekly posting) Customize prompts to match your writing style and industry Set up content approval workflows (optional) Results you can expect 400% increase in profile views within 3 weeks Generate 120+ posts per month vs manual 12 posts Free up 15+ hours weekly for revenue-generating activities Consistent posting schedule that builds audience engagement Professional content that converts followers to clients Time to set up: 30-45 minutes Technical level: Beginner to intermediate APIs required: OpenAI, LinkedIn API Cost: OpenAI usage fees only (approximately $5-15/month) This workflow transforms LinkedIn content creation from a time-consuming daily task into a fully automated system that works while you sleep. Perfect for entrepreneurs, marketers, and content creators who want consistent LinkedIn presence without the manual effort. | {
"nodes": [
{
"name": "n8n-nodes-base.linkedIn",
"type": "LinkedIn",
"category": [
"Communication",
"Marketing"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.chainLlm",
"type": "Basic LLM Chain",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.openAi",
"type": "OpenAI",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 6,
"node_types": [
"LinkedIn",
"AI Agent",
"Basic LLM Chain",
"OpenAI Chat Model",
"Structured Output Parser",
"OpenAI"
]
} | |
Create an n8n workflow for: Automated Job Applications & Status Tracking with LinkedIn, Indeed & Google Sheets
Description: Apply to jobs automatically from Google Sheets with status tracking Who's it for Job seekers who want to streamline their application process, save time on repetitive tasks, and never miss following up on applications. Perfect for anyone managing multiple job applications across different platforms. What it does This workflow automatically applies to jobs from a Google Sheet, tracks application status, and keeps you updated with notifications. It handles the entire application lifecycle from submission to status monitoring. Key features: Reads job listings from Google Sheets with filtering by priority and status Automatically applies to jobs on LinkedIn, Indeed, and other platforms Updates application status in real-time Checks application status every 2 days and notifies you of changes Sends email notifications for successful applications and status updates Prevents duplicate applications and manages rate limiting How it works The workflow runs on two main schedules: Daily Application Process (9 AM, weekdays): Reads your job list from Google Sheets Filters for jobs marked as "Not Applied" with Medium/High priority Processes each job individually to prevent rate limiting Applies to jobs using platform-specific APIs (LinkedIn, Indeed, etc.) Updates the sheet with application status and reference ID Sends confirmation email for each application Status Monitoring (Every 2 days at 10 AM): Checks all jobs with "Applied" status Queries job platforms for application status updates Updates the sheet if status has changed Sends notification emails for status changes (interviews, rejections, etc.) Requirements Google account with Google Sheets access Gmail account for notifications Resume stored online (Google Drive, Dropbox, etc.) 
API access to job platforms (LinkedIn, Indeed) - optional for basic version n8n instance (self-hosted or cloud) How to set up Step 1: Create Your Job Tracking Sheet Create a Google Sheet with these exact column headers: | Job_ID | Company | Position | Status | Applied_Date | Last_Checked | Application_ID | Notes | Job_URL | Priority | |--------|---------|----------|--------|--------------|--------------|----------------|-------|---------|----------| | JOB001 | Google | Software Engineer | Not Applied | | | | | | High | | JOB002 | Microsoft | Product Manager | Not Applied | | | | | | Medium | Column explanations: Job_ID: Unique identifier (JOB001, JOB002, etc.) Company: Company name Position: Job title Status: Not Applied, Applied, Under Review, Interview Scheduled, Rejected, Offer Applied_Date: Auto-filled when application is submitted Last_Checked: Auto-updated during status checks Application_ID: Platform reference ID (auto-generated) Notes: Additional information or application notes Job_URL: Direct link to job posting Priority: High, Medium, Low (Low priority jobs are skipped) Step 2: Configure Google Sheets Access In n8n, go to Credentials → Add Credential Select Google Sheets OAuth2 API Follow the OAuth setup process to authorize n8n Test the connection with your job tracking sheet Step 3: Set Up Gmail Notifications Add another credential for Gmail OAuth2 API Authorize n8n to send emails from your Gmail account Test by sending a sample email Step 4: Update Workflow Configuration In the "Set Configuration" node, update these values: spreadsheetId: Your Google Sheet ID (found in the URL) resumeUrl: Direct link to your resume (make sure it's publicly accessible) yourEmail: Your email address for notifications coverLetterTemplate: Customize your cover letter template Step 5: Customize Application Logic For basic version (no API access): The workflow includes placeholder HTTP requests that you can replace with actual job platform 
integrations. For advanced version (with API access): Replace LinkedIn/Indeed HTTP nodes with actual API calls Add your API credentials to n8n's credential store Update the platform detection logic for additional job boards Step 6: Test and Activate Add 1-2 test jobs to your sheet with "Not Applied" status Run the workflow manually to test Check that the sheet gets updated and you receive notifications Activate the workflow to run automatically How to customize the workflow Adding New Job Platforms Update Platform Detection: Modify the "Check Platform Type" node to recognize new job board URLs Add New Application Node: Create HTTP request nodes for new platforms Update Status Checking: Add status check logic for the new platform Customizing Application Strategy **Rate Limiting**: Add "Wait" nodes between applications (recommended: 5-10 minutes) **Application Timing**: Modify the cron schedule to apply during optimal hours **Priority Filtering**: Adjust the filter conditions to match your criteria **Multiple Resumes**: Use conditional logic to select different resumes based on job type Enhanced Notifications **Slack Integration**: Replace Gmail nodes with Slack for team notifications **Discord Webhooks**: Send updates to Discord channels **SMS Notifications**: Use Twilio for urgent status updates **Dashboard Updates**: Connect to Notion, Airtable, or other productivity tools Advanced Features **AI-Powered Personalization**: Use OpenAI to generate custom cover letters **Job Scoring**: Implement scoring logic based on job requirements vs.
your skills **Interview Scheduling**: Auto-schedule interviews when status changes **Follow-up Automation**: Send follow-up emails after specific time periods Important Notes Platform Compliance Always respect rate limits to avoid being blocked Follow each platform's Terms of Service Use official APIs when available instead of web scraping Don't spam job boards with excessive applications Data Privacy Store credentials securely using n8n's credential store Don't hardcode API keys or personal information in nodes Regularly review and clean up old application data Ensure your resume link is secure but accessible Quality Control Start with a small number of jobs to test the workflow Review application success rates and adjust strategy Monitor for errors and set up proper error handling Keep your job list updated and remove expired postings This workflow transforms job searching from a manual, time-consuming process into an automated system that maximizes your application efficiency while maintaining quality and compliance. | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.gmail",
"type": "Gmail",
"category": [
"Communication",
"HITL"
]
}
],
"node_count": 3,
"node_types": [
"Google Sheets",
"HTTP Request",
"Gmail"
]
} | |
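The "Check Platform Type" and priority-filter steps described in the template above could be sketched in an n8n Code node roughly as follows. This is a minimal illustration, not the template's actual code: the function names and URL patterns are assumptions, and the column names mirror the sheet headers listed in the setup steps.

```javascript
// Hypothetical sketch of the "Check Platform Type" logic: map a job's URL
// to a platform label so the matching application branch can run.
function detectPlatform(jobUrl) {
  const url = String(jobUrl || "").toLowerCase();
  if (url.includes("linkedin.com")) return "linkedin";
  if (url.includes("indeed.com")) return "indeed";
  return "other"; // falls through to a generic HTTP request branch
}

// Priority filter mirroring the template's stated rule: only jobs still
// marked "Not Applied" with Medium/High priority are processed.
function shouldApply(job) {
  return job.Status === "Not Applied" && ["High", "Medium"].includes(job.Priority);
}
```

In the workflow these checks would sit between the Google Sheets read and the per-platform HTTP request nodes, so Low-priority and already-applied rows never reach the application step.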
Create an n8n workflow for: Automate Email Filtering & AI Summarization. 100% free & effective, works 24/7
Description: Good to know: This workflow automatically processes incoming emails (you can filter them based on your needs) and creates concise AI-powered summaries, then logs them to a Google Sheets spreadsheet for easy tracking and analysis. Who is this for? ➖Business professionals who receive many emails and need quick summaries ➖Customer service teams tracking email communications ➖Project managers monitoring email correspondence ➖Anyone who wants to automatically organize and summarize their email communications What problem is this workflow solving? This workflow solves the problem of email overload by automatically reading incoming emails, generating concise summaries using AI, and organizing them in a structured format. It eliminates the need to manually read through every email to understand the key points and maintains a searchable record of communications. What this workflow does: ✅Monitors your Gmail inbox for new emails ✅Filters emails based on specific criteria (sender validation) ✅Extracts key information (sender, date, subject, content) ✅Uses AI to generate concise summaries of email content ✅Automatically logs all data including the AI summary to a Google Sheets spreadsheet How it works: 1️⃣Gmail trigger monitors for new emails at specified intervals 2️⃣Email data is processed and formatted using JavaScript 3️⃣A conditional check validates the sender 4️⃣AI agent (powered by Groq's language model) reads the email content and generates a summary 5️⃣All information is automatically appended to a Google Sheets document How to use: Set up Gmail OAuth2 credentials in n8n Configure Google Sheets OAuth2 credentials Set up Groq API credentials for AI processing Create a Google Sheets document and update the document ID Customize the sender validation criteria as needed Activate the workflow Requirements: ✅n8n instance (cloud or self-hosted) ✅Gmail account with OAuth2 access ✅Google Sheets account ✅AI API ✅Basic understanding of n8n workflows Customizing this
workflow: 🟢Modify the Gmail trigger filters to target specific labels or criteria 🟢Adjust the sender validation logic in the conditional node 🟢Customize the AI prompt to change summary style or focus 🟢Add additional data fields to the Google Sheets output 🟢Change the polling frequency for checking new emails 🟢Switch to different AI models by replacing the Groq node | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGroq",
"type": "Groq Chat Model",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 4,
"node_types": [
"Google Sheets",
"Code",
"AI Agent",
"Groq Chat Model"
]
} | |
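The "Email data is processed and formatted using JavaScript" step in the template above could look roughly like this inside a Code node. It is a sketch under assumptions: the input field names (`from`, `date`, `subject`, `text`) are guesses at the Gmail node's output shape, and the allow-list domain check is one possible form of the sender validation the template mentions.

```javascript
// Hypothetical formatting step: extract the fields the sheet expects
// (sender, date, subject, content) from a raw Gmail payload.
function formatEmail(raw) {
  return {
    sender: raw.from || "unknown",
    date: new Date(raw.date).toISOString().slice(0, 10), // YYYY-MM-DD
    subject: raw.subject || "(no subject)",
    // Trim and cap the body so the AI summary prompt stays small.
    content: String(raw.text || "").trim().slice(0, 2000),
  };
}

// Sender validation as in the conditional node: a simple domain allow-list
// check (example.com is a placeholder domain).
function isAllowedSender(sender, allowedDomain = "example.com") {
  return String(sender).toLowerCase().endsWith("@" + allowedDomain);
}
```

Only items that pass `isAllowedSender` would continue to the Groq-powered AI agent for summarization.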
Create an n8n workflow for: 🤖 Create Your First AI Agent with Weather & Web Scraping (Starter Kit)
Description: This workflow contains community nodes that are only compatible with the self-hosted version of n8n. How it works This template is your personal launchpad into the world of AI-powered automation. It provides a fully functional, interactive AI chatbot that you can set up in minutes, designed specifically for those new to AI Agents. What is an AI Agent? Think of it as a smart assistant that doesn't just talk—it acts. You give it a set of "tools" (like other n8n tool nodes), and it intelligently decides which tool to use to answer your questions or complete your tasks. This starter kit comes with a pre-built "toolbox" of superpowers, allowing your agent to: **Get the Weather:** Ask for the forecast anywhere in the world. **Get the News:** Fetch the latest headlines from n8n, CNN, and others. The workflow is designed to be a hands-on learning experience, with detailed sticky notes explaining every component, from the chat interface to the agent's "brain" and "memory." Set up steps Setup time: ~2-3 minutes This workflow is designed to be incredibly easy to start. You only need one free API key to get it working. Add Your AI Key: The workflow uses Google's Gemini model by default. You will need a free Gemini API key. Find the Gemini node on the canvas. The sticky note right below it (How to Get Google Gemini Credentials) provides a link and simple instructions to get your key. In the Gemini node, click the Credential dropdown and select + Create New Credential to add your key. Activate the Workflow: At the top-right of the screen, click the "Inactive" toggle switch. It will turn green and say "Active". Your agent is now live! Start Chatting: Open the Example Chat Window node (it has a 💬 icon). In its parameter panel, you will see a Chat URL. Click the link to copy it. Paste the URL into a new browser tab and start asking your agent questions! Optional: The template also includes a disabled OpenAI chat model node and tools for Google Calendar and Gmail.
You can enable and configure these later to change the underlying AI model or give your agent even more superpowers! | {
"nodes": [
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 4,
"node_types": [
"AI Agent",
"OpenAI Chat Model",
"Simple Memory",
"Google Gemini Chat Model"
]
} | |
Create an n8n workflow for: Auto-Generate SEO Blog Posts with Perplexity, GPT, Leonardo & WordPress
Description: ✨ SEO Blog Post Automation with Perplexity, GPT, Leonardo AI & WordPress This workflow automates the creation and publishing of weekly SEO-optimized blog posts using AI and publishes them directly to WordPress — with featured images and tracking in Google Sheets. 🧠 Who is this for This automation is ideal for: Startup platforms and tech blogs Content creators and marketers Solopreneurs who want consistent blog output Spanish-speaking audiences focused on startup trends ⚙️ What it does ⏰ Runs every Monday at 6:00 AM via CRON 📡 Uses Perplexity AI to research trending startup topics 📝 Generates a 1000–1500 word article with GPT in structured HTML 🎨 Creates a cinematic blog image using Leonardo AI 🖼️ Uploads the image to WordPress with alt text and SEO-friendly filename 📰 Publishes the post in a pre-defined category 📊 Logs the post in Google Sheets for tracking 🚀 How to set it up Connect your credentials: Perplexity API OpenAI (GPT-4.1 Mini or similar) Leonardo AI (Bearer token) WordPress (Basic Auth) Google Sheets (OAuth2) Customize your content: Adjust the prompt inside the HTTP node to fit your tone or focus Change the WordPress category ID Update scheduling if you want a different publishing day Test the workflow manually to ensure all steps function correctly 💡 Pro tips Add Slack or email nodes to get notified when a post goes live Use multiple categories or RSS feeds for content diversification Adjust GPT prompt to support different languages or tones Add post-validation rules if needed before publishing 🎯 Why this matters This workflow gives you a full editorial process on autopilot: research, writing, design, publishing, and tracking — all powered by AI. No more blank pages or manual posting. Use it to scale your content strategy, boost your SEO, and stay relevant — 100% hands-free. | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.openAi",
"type": "OpenAI",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 4,
"node_types": [
"Google Sheets",
"HTTP Request",
"Code",
"OpenAI"
]
} | |
Create an n8n workflow for: Automated Law Firm Lead Management & Scheduling with AI, Jotform & Calendar
Description: YouTube Explanation: This n8n workflow is designed to automate the initial intake and scheduling for a law firm. It's split into two main parts: New Inquiry Handling: Kicks off when a potential client fills out a JotForm, saves their data, and sends them an initial welcome message on WhatsApp. Appointment Scheduling: Activates when the client replies on WhatsApp, allowing an AI agent to chat with them to schedule a consultation. Here’s a detailed breakdown of the prerequisites and each node. Prerequisites Before building this workflow, you'll need accounts and some setup for each of the following services: JotForm **JotForm Account**: You need an active JotForm account. **A Published Form**: Create a form with the exact fields used in the workflow: Full Name, Email Address, Phone Number, I am a..., Legal Service of Interest, Brief Message, and How Did You Hear About Us?. **API Credentials**: Generate API keys from your JotForm account settings to connect it with n8n. Google **Google Account**: To use Google Sheets and Google Calendar. **Google Sheet**: Create a new sheet named "Law Client Enquiries". The first row must have these exact headers: Full Name, Email Address, Phone Number, client type, Legal Service of Interest, Brief Message, How Did You Hear About Us?. **Google Calendar**: An active calendar to manage appointments. **Google Cloud Project**: Service Account Credentials (for Sheets): In the Google Cloud Console, create a service account, generate JSON key credentials, and enable the Google Sheets API. You must then share your Google Sheet with the service account's email address (e.g., OAuth Credentials (for Calendar): Create OAuth 2.0 Client ID credentials to allow n8n to access your calendar on your behalf. You'll need to enable the Google Calendar API. Gemini API Key: Enable the Vertex AI API in your Google Cloud project and generate an API key to use the Google Gemini models.
WhatsApp **Meta Business Account**: Required to use the WhatsApp Business Platform. WhatsApp Business Platform Account: You need to set up a business account and connect a phone number to it. This is **different** from the regular WhatsApp or WhatsApp Business app. **API Credentials**: Get the necessary access tokens and IDs from your Meta for Developers dashboard to connect your business number to n8n. PostgreSQL Database **A running PostgreSQL instance**: This can be hosted anywhere (e.g., AWS, DigitalOcean, Supabase). The AI agent needs it to store and retrieve conversation history. **Database Credentials**: You'll need the host, port, user, password, and database name to connect n8n to it. Node-by-Node Explanation The workflow is divided into two distinct logical flows. Flow 1: New Client Intake from JotForm This part triggers when a new client submits your form. JotForm Trigger What it does: This is the starting point. It automatically runs the workflow whenever a new submission is received for the specified JotForm (Form ID: 252801824783057). Prerequisites: A JotForm account and a created form. Append or update row in sheet (Google Sheets) What it does: It takes the data from the JotForm submission and adds it to your "Law Client Enquiries" Google Sheet. How it works: It uses the appendOrUpdate operation. It tries to find a row where the "Email Address" column matches the email from the form. If it finds a match, it updates that row; otherwise, it appends a new row at the bottom. Prerequisites: A Google Sheet with the correct headers, shared with your service account. AI Agent What it does: This node crafts the initial welcome message to be sent to the client. How it works: It uses a detailed prompt that defines a persona ("Alex," a legal intake assistant) and instructs the AI to generate a professional WhatsApp message. It dynamically inserts the client's name and service of interest from the Google Sheet data into the prompt.
Connected Node: It's powered by the Google Gemini Chat Model. Send message (WhatsApp) What it does: It sends the message generated by the AI Agent to the client. How it works: It takes the client's phone number from the data (Phone Number column) and the AI-generated text (output from the AI Agent node) to send the message via the WhatsApp Business API. Prerequisites: A configured WhatsApp Business Platform account. Flow 2: AI-Powered Scheduling via WhatsApp This part triggers when the client replies to the initial message. WhatsApp Trigger What it does: This node listens for incoming messages on your business's WhatsApp number. When a client replies, it starts this part of the workflow. Prerequisites: A configured WhatsApp Business Platform account. If node What it does: It acts as a simple filter. It checks if the incoming message text is empty. If it is (e.g., a status update), the workflow stops. If it contains text, it proceeds to the AI agent. AI Agent1 What it does: This is the main conversational brain for scheduling. It handles the back-and-forth chat with the client. How it works: Its prompt is highly detailed, instructing it to act as "Alex" and follow a strict procedure for scheduling. It has access to several "tools" to perform actions. Connected Nodes: Google Gemini Chat Model1: The language model that does the thinking. Postgres Chat Memory: Remembers the conversation history with a specific user (keyed by their WhatsApp ID), so the user doesn't have to repeat themselves. Tools: Know about the user enquiry, GET MANY EVENTS..., and Create an event. AI Agent Tools (What the AI can *do*) Know about the user enquiry (Google Sheets Tool): When the AI needs to know who it's talking to, it uses this tool. It takes the user's phone number and looks up their original enquiry details in the "Law Client Enquiries" sheet. GET MANY EVENTS... 
(Google Calendar Tool): When a client suggests a date, the AI uses this tool to check your Google Calendar for any existing events on that day to see if you're free. Create an event (Google Calendar Tool): Once a time is agreed upon, the AI uses this tool to create the event in your Google Calendar, adding the client as an attendee. Send message1 (WhatsApp) What it does: Sends the AI's response back to the client. This could be a confirmation that the meeting is booked, a question asking for their email, or a suggestion for a different time if the requested slot is busy. How it works: It sends the output text from AI Agent1 to the client's WhatsApp ID, continuing the conversation. | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.whatsApp",
"type": "WhatsApp Business Cloud",
"category": [
"Communication",
"HITL"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryPostgresChat",
"type": "Postgres Chat Memory",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 5,
"node_types": [
"Google Sheets",
"WhatsApp Business Cloud",
"AI Agent",
"Google Gemini Chat Model",
"Postgres Chat Memory"
]
} | |
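The appendOrUpdate behavior the template above describes (match on the "Email Address" column, update the row if found, otherwise append at the bottom) can be expressed as a small sketch. The row shape follows the sheet headers listed in the prerequisites; the function itself is illustrative, not the Google Sheets node's internals.

```javascript
// Sketch of Google Sheets appendOrUpdate semantics: find a row whose
// "Email Address" matches the incoming submission, update it in place,
// otherwise append a new row at the bottom of the sheet.
function appendOrUpdate(rows, incoming) {
  const i = rows.findIndex(r => r["Email Address"] === incoming["Email Address"]);
  if (i >= 0) {
    rows[i] = { ...rows[i], ...incoming }; // update the existing row
  } else {
    rows.push(incoming); // append a new row
  }
  return rows;
}
```

This is why the "Email Address" header must match exactly: it is the key the node uses to decide between updating and appending.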
Create an n8n workflow for: Create Viral Ads with AI: NanoBanana & publish on socials via upload-post
Description: 💥 Create viral Ads with NanoBanana & Seedance, publish on socials via upload-post Who is this for? This workflow is designed for marketers, content creators, and small businesses who want to automate the creation of engaging social media ads without spending hours on manual design, video editing, or publishing. What problem is this workflow solving? / Use case Manually creating ads for multiple platforms is time-consuming and repetitive. You need to generate visuals, edit videos, add music, and then publish them across social channels. This workflow automates the end-to-end ad production pipeline, saving time while ensuring consistent, professional-quality output. What this workflow does Receives ad ideas via Telegram. Uses NanoBanana to generate and edit realistic product images. Transforms images into engaging short videos with Seedance. Generates background music with Suno. Merges video and audio into a polished final ad. Reads brand info and generates ad copy with AI (OpenAI). Publishes ads to Instagram, TikTok, YouTube, Facebook, and X via upload-post. Stores media and campaign data in Google Drive and Google Sheets for tracking. Sends back notifications and previews via Telegram. Setup Connect your accounts: Telegram Google Drive Google Sheets OpenAI API NanoBanana API Seedance API Suno API Upload-post Prepare Google Sheets: Add a sheet for brand details (name, category, features, website). Add another sheet for video logs (status, links, captions). Configure upload-post: Ensure your social accounts (TikTok, Instagram, YouTube, Facebook, X) are linked to upload-post. How to customize this workflow to your needs **Prompts** → Adjust the **image/video/music prompts** to better reflect your brand’s tone and products. **Ad copy** → Modify the AI prompt inside the **Ads Copywriter Generator** to control wording, style, and structure. **Publishing scope** → Choose only the platforms you want (TikTok, Instagram, etc.) inside the **upload-post** node.
**Storage** → Update Google Drive folder IDs and Google Sheets document IDs to match your own workspace. 👉 With this template, you get a fully automated viral ad production system powered by AI visuals, video rendering, and auto-publishing across social platforms. Perfect for scaling your content strategy while saving time. 📄 Documentation: Notion Guide Demo Video 🎥 Watch the full tutorial here: YouTube Demo Need help customizing? Contact me for consulting and support: LinkedIn / YouTube | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.telegram",
"type": "Telegram",
"category": [
"Communication",
"HITL"
]
},
{
"name": "n8n-nodes-base.googleDrive",
"type": "Google Drive",
"category": [
"Data & Storage"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.openAi",
"type": "OpenAI",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.toolThink",
"type": "Think Tool",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 10,
"node_types": [
"Google Sheets",
"HTTP Request",
"Telegram",
"Google Drive",
"Code",
"AI Agent",
"OpenAI Chat Model",
"Structured Output Parser",
"OpenAI",
"Think Tool"
]
} | |
Create an n8n workflow for: Get Started with Google Sheets in n8n
Description: A hands-on starter workflow that teaches beginners how to: Pull rows from a Google Sheet Append a new record that mimics a form submission Generate AI-powered text with GPT-4o based on a “Topic” column Write the AI output back into the correct row using an update operation Along the way you’ll learn the three essential Google Sheets operations in n8n (read → append → update), see how to pass sheet data into an OpenAI node, and document each step with sticky-note instructions—perfect for anyone taking their first steps in no-code automation. 0️⃣ Prerequisites **Google Sheets** Open Google Cloud Console → create / select a project. Enable Google Sheets API under APIs & Services. Create an OAuth Desktop credential and connect it in n8n. Share the spreadsheet with the Google account linked to the credential. **OpenAI** Create a secret key at < In n8n → Credentials → New → choose OpenAI API and paste the key. **Sample sheet to copy** (make your own copy and use its link) < 1️⃣ Trigger Manual Trigger – lets you run on demand while learning. (Swap for a Schedule or Webhook once you automate.) 2️⃣ Read existing rows **Node:** Get Rows from Google Sheets Reads every row from Sheet1 of your copied file. 3️⃣ Generate a demo row **Node:** Generate 1 Row of Data (Set node) Pretends a form was submitted: Name, Email, Topic, Submitted = "Yes" 4️⃣ Append the new row **Node:** Append Data to Google Operation append → writes to the first empty line. 5️⃣ Create a description with GPT-4o OpenAI Chat Model – uses your OpenAI credential. Write description (AI Agent) – prompt = the Topic. Structured Output Parser – forces JSON like: { "description": "…" }. 6️⃣ Update that same row **Node:** Update Sheets data Operation update. Matches on column Email to update the correct line. Writes the new Description cell returned by GPT-4o. 7️⃣ Why this matters Demonstrates the three core Google Sheets operations: read → append → update.
Shows how to enrich sheet data with an AI step and push the result right back. Sticky Notes provide inline docs so anyone opening the workflow understands the flow instantly. 👤 Need help? Robert Breen – Automation Consultant ✉️ 🔗 < | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 4,
"node_types": [
"Google Sheets",
"AI Agent",
"OpenAI Chat Model",
"Structured Output Parser"
]
} | |
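The Structured Output Parser step in the starter workflow above forces the model's reply into JSON shaped like `{ "description": "…" }`. A minimal sketch of that enforcement is shown below; the function name is an assumption for illustration, not the parser node's actual implementation.

```javascript
// Sketch of what the Structured Output Parser enforces: the model reply
// must be valid JSON with a non-empty string "description" field,
// otherwise the step fails and can be retried.
function parseDescription(modelReply) {
  let obj;
  try {
    obj = JSON.parse(modelReply);
  } catch {
    throw new Error("Model did not return valid JSON");
  }
  if (typeof obj.description !== "string" || obj.description.length === 0) {
    throw new Error('Reply is missing a string "description" field');
  }
  return obj.description;
}
```

Guaranteeing this shape is what lets the final node write `description` straight into the matched row's Description cell without extra checks.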
Create an n8n workflow for: Auto-Generate Virtual AI Try-On Images for WooCommerce with Gemini Nano Banana
Description: This workflow automates the creation of AI-generated virtual try-on images for fashion eCommerce stores. Instead of relying on expensive and time-consuming photoshoots, the system uses AI to generate realistic images of models wearing selected clothing items. This n8n workflow automates the process of generating AI-powered virtual try-on images for a WooCommerce store. It fetches product data from a Google Sheet, uses the Fal.ai Nano Banana model to create an image of a model wearing the clothing item, and then updates both the Google Sheet and the WooCommerce product with the final generated image. Advantages ✅ Cost Reduction: Eliminates the need for professional photo shoots, saving on models, photographers, and studio expenses. ✅ Time Efficiency: Automates the entire workflow—from data input to product update—minimizing manual work. ✅ Scalability: Works seamlessly across large product catalogs, making it easy to update hundreds of products quickly. ✅ Enhanced eCommerce Experience: Provides shoppers with realistic previews of clothing on models, boosting trust and conversion rates. ✅ Marketing Flexibility: The generated images can also be repurposed for ads, social media, and promotional campaigns. ✅ Centralized Management: Google Sheets acts as the control center, making it easy to manage inputs and track results. How It Works The workflow operates in a sequential, loop-based manner to process multiple products from a spreadsheet. Here is the logical flow: Manual Trigger & Data Fetch: The workflow starts manually (e.g., by clicking "Test workflow"). It first reads data from a specified Google Sheet, looking for rows where the "IMAGE RESULT" column is empty. Loop Processing: It loops over each row of data fetched from the sheet. Each row should contain URLs for a model image and a product image, along with a WooCommerce product ID. 
API Request to Generate Image: For each item in the loop, the workflow sends a POST request to the Fal.ai Nano Banana API. The request includes the two image URLs and a prompt instructing the AI to create a photo of the model wearing the submitted clothing item. Polling for Completion: The AI processing is asynchronous. The workflow enters a polling loop: it waits for 60 seconds and then checks the status of the processing request. If the status is not COMPLETED, it waits and checks again. This loop continues until the image is ready. Fetching and Storing the Result: Once the status is COMPLETED, the workflow retrieves the URL of the generated image, downloads the image file, and uploads it to a designated folder in Google Drive. Updating Systems: The workflow then performs two crucial update steps: It updates the original Google Sheet row, writing the URL of the final generated image into the "IMAGE RESULT" column. It updates the corresponding product in WooCommerce, adding the generated image to the product's gallery. Loop Continuation: After processing one item, the workflow loops back to process the next row in the Google Sheet until all items are complete. **Set Up Steps** To make this workflow functional, you need to configure three main connections: Step 1: Prepare the Google Sheet Create a Google Sheet with the following columns: IMAGE MODEL, IMAGE PRODUCT, PRODUCT ID, and IMAGE RESULT. Populate the first three columns for each product. The IMAGE RESULT column must be left blank; the workflow will fill it automatically. In the n8n workflow, configure the "Google Sheets" node to point to your specific Google Sheet and worksheet. Step 2: Configure the Fal.ai API Key Create an account at fal.ai and obtain your API key. In the n8n workflow, locate the three "HTTP Request" nodes named "Get Url image", "Get status", and "Create Image".
Edit the credentials for these nodes (named "Fal.run API") and update the Value field in the Header Auth to be Key YOURAPIKEY (replacing YOURAPIKEY with your actual key). Step 3: Set Up WooCommerce API Ensure you have the API keys (Consumer Key and Consumer Secret) for your WooCommerce store's REST API. In the n8n workflow, locate the "WooCommerce" node. Edit its credentials and provide the required information: your store's URL and the API keys. This allows the workflow to authenticate and update your products. Need help customizing? Contact me for consulting and support or add me on Linkedin. | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.googleDrive",
"type": "Google Drive",
"category": [
"Data & Storage"
]
},
{
"name": "n8n-nodes-base.wooCommerce",
"type": "WooCommerce",
"category": [
"Sales"
]
}
],
"node_count": 4,
"node_types": [
"Google Sheets",
"HTTP Request",
"Google Drive",
"WooCommerce"
]
} | |
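The Wait → "Get status" polling loop described in the workflow above (wait 60 seconds, check status, repeat until COMPLETED) has the general shape below. The function names and the cap on attempts are illustrative assumptions; in n8n this loop is built from Wait, HTTP Request, and IF nodes rather than one code block.

```javascript
// Generic poll-until-complete sketch of the asynchronous image job.
// checkStatus is assumed to resolve to the job's status string
// (e.g., "PENDING", "IN_PROGRESS", "COMPLETED").
async function pollUntilComplete(checkStatus, { intervalMs = 60000, maxAttempts = 20 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status === "COMPLETED") return true; // image is ready to download
    if (attempt < maxAttempts) {
      await new Promise(resolve => setTimeout(resolve, intervalMs));
    }
  }
  return false; // give up rather than loop forever
}
```

Capping the attempts (unlike an unbounded loop) is a sensible guard in practice: a stuck job otherwise blocks every remaining row in the sheet.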
Create an n8n workflow for: 🎓 Learn Code Node (JavaScript) with an Interactive Hands-On Tutorial
Description: How it works This workflow is a hands-on tutorial for the Code node in n8n, covering both basic and advanced concepts through a simple data processing task. Provides Sample Data: The workflow begins with a sample list of users. Processes Each Item (Run Once for Each Item): The first Code node iterates through each user to calculate their fullName and age. This demonstrates basic item-by-item data manipulation using $input.item.json. Fetches External Data (Advanced): The second Code node showcases a more advanced feature. For each user, it uses the built-in this.helpers.httpRequest function to call an external API (genderize.io) to enrich the data with a predicted gender. Processes All Items at Once (Run Once for All Items): The third Code node receives the fully enriched list of users and runs only once. It uses $items() to access the entire list and calculate the averageAge, returning a single summary item. Create a Binary File: The final Code node gets the fully enriched list of users once again and creates a binary CSV file to show how to use binary data Buffer in JavaScript. Set up steps Setup time: < 1 minute This workflow is a self-contained tutorial and requires no setup. Explore the Nodes: Click on each of the Code nodes to read the code and the comments explaining each step, from basic to advanced. Run the Workflow: Click "Execute Workflow" to see it in action. Check the Output: Click on each node after the execution to see how the data is transformed at each stage. Notice how the data is progressively enriched. Experiment! Try changing the data in the 1. Sample Data node, or modify the code in the Code nodes to see what happens. | {
"nodes": [
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
}
],
"node_count": 1,
"node_types": [
"Code"
]
} | |
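The three Code-node modes this tutorial walks through can be sketched in plain JavaScript. Note that $input.item.json and $items() are n8n-only globals, so this standalone sketch simulates them with an ordinary array — it illustrates the same logic, not the template's exact code:

```javascript
// Sample users, standing in for the "1. Sample Data" node output.
const items = [
  { json: { firstName: "Ada", lastName: "Lovelace", birthYear: 1990 } },
  { json: { firstName: "Alan", lastName: "Turing", birthYear: 1985 } },
];

// "Run Once for Each Item": derive fullName and age per user.
// In n8n this body would read $input.item.json instead of a loop variable.
const currentYear = 2025;
const enriched = items.map(({ json }) => ({
  json: {
    ...json,
    fullName: `${json.firstName} ${json.lastName}`,
    age: currentYear - json.birthYear,
  },
}));

// "Run Once for All Items": compute a single summary item.
// In n8n this body would call $items() to get the whole list.
const averageAge =
  enriched.reduce((sum, item) => sum + item.json.age, 0) / enriched.length;
const summary = [{ json: { averageAge } }];

// Binary output: build a CSV string and wrap it in a Buffer,
// as the final Code node does.
const csv =
  "fullName,age\n" +
  enriched.map((i) => `${i.json.fullName},${i.json.age}`).join("\n");
const csvBuffer = Buffer.from(csv, "utf8");
```

Running the sketch yields one summary item with the average age and a Buffer holding the CSV, mirroring what each stage of the workflow outputs.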
Create an n8n workflow for: RAG Starter Template using Simple Vector Stores, Form trigger and OpenAI
Description: This template quickly shows how to use RAG in n8n. Who is this for? This template is for everyone who wants to start giving knowledge to their Agents through RAG. Requirements Have a PDF with custom knowledge that you want to provide to your agent. Setup No setup required. Just hit Execute Workflow, upload your knowledge document and then start chatting. How to customize this to your needs Add custom instructions to your Agent by changing the prompts in it. Add a different way to load in knowledge to your vector store, e.g. by looking at some Google Drive files or loading knowledge from a table. Exchange the Simple Vector Store nodes with your own vector store tools ready for production. Add a more sophisticated way to rank files found in the vector store. For more information read our docs on RAG in n8n. | {
"nodes": [
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
"type": "Embeddings OpenAI",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.vectorStoreInMemory",
"type": "Simple Vector Store",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
"type": "Default Data Loader",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 5,
"node_types": [
"AI Agent",
"Embeddings OpenAI",
"OpenAI Chat Model",
"Simple Vector Store",
"Default Data Loader"
]
} | |
Create an n8n workflow for: Generate AI Videos from Telegram Messages with Nano Banana & Veo-3
Description: How to use the provided n8n workflow (step‑by‑step), what matters, what it’s good for, and costs per run. What this workflow does (in simple terms) 1) You write (or speak) your idea in Telegram. 2) The workflow builds two short prompts: Image prompt → generates one thumbnail via KIE.ai – Nano Banana (Gemini 2.5 Flash Image). Video prompt → starts a Veo‑3 (KIE.ai) video job using the thumbnail as init image. 3) You receive the thumbnail first, then the short video back in Telegram once rendering completes. Typical output: 1 PNG thumbnail + 1 short MP4 video (e.g., 8–12 s, 9:16). Why this is useful Rapid ideation: Turn a quick text/voice idea into a ready‑to‑post thumbnail + matching short video. Consistent look: The video uses the thumbnail as init image, keeping colors, objects and mood consistent. One chat = full pipeline: Everything happens directly inside Telegram—no context switches. Agency‑ready: Collect ideas from clients/team chats, and deliver outputs quickly. What you need before importing 1) KIE.ai account & API key Sign up or sign in at KIE.ai, go to Dashboard → API / Keys. Copy your KIE_API_KEY (keep it private). 2) Telegram Bot (BotFather) In Telegram, open @BotFather → command /newbot. Choose a name and a unique username (must end with bot). Copy your Bot Token (keep it private). 3) Your Telegram Chat ID (browser method) Send any message to your bot so you have an active chat Open Telegram Web and open the chat with the bot Find the chat ID in the URL Import & minimal configuration (n8n) 1) Import the provided workflow JSON in n8n. 2) Create Credentials: Telegram API: paste your Bot Token. HTTP (KIE.ai): usually you’ll pass Authorization: Bearer {{ $env.KIE_API_KEY }} directly in the HTTP Request node headers, or make a generic HTTP credential that injects the header. 3) Replace hardcoded values in the template: Chat ID: use an Expression like {{$json.message.chat.id}} from the Telegram Trigger (prefer dynamic over hardcoded IDs). 
Authorization headers: never in query params—always in Headers. Content‑Type spelling: Content-Type (no typos). How to run it (basic flow) 1) Start the workflow (activate trigger). 2) Send a message to your bot, e.g. “glass hourglass on a black mirror floor, minimal, elegant” 3) The bot replies with the thumbnail (PNG), then the Veo‑3 video (MP4). If you send a voice message, the flow will download & transcribe it first, then proceed as above. Pricing (rule of thumb) Image (Nano Banana via KIE.ai): ~$0.02–$0.04 per image (plan‑dependent). Video (Veo‑3 via KIE.ai): Fast: $0.40 per 8 seconds ($0.05/s) Quality: $2.00 per 8 seconds ($0.25/s) Typical run (1 image + 8 s Fast video) ≈ $0.42–$0.44. > These are indicative values. Check your KIE.ai dashboard for the latest pricing/quotas. Why KIE.ai over the “classic” Google API? Cheaper in practice for short video clips and image gen in this pipeline. One vendor for both image & video (same auth, similar responses) = less integration hassle. Quick start: Playground/tasks/status endpoints are n8n‑friendly for polling workflows. Security & reliability tips Never hardcode API keys or Chat IDs into nodes—use Credentials or Environment variables. Add IF + error paths after each HTTP node: If status != 200 → Send friendly Telegram message (“Please try again”) + log to admin. If you use callback URLs for video completion, ensure the URL is publicly reachable (n8n Webhook URL). Otherwise, stick to polling. For rate limits, add a Wait node and limit concurrency in workflow settings. Keep aspect & duration consistent across prompt + API calls to avoid unexpected crops. Advanced: voice input (optional) The template supports voice via a Switch → Download → Transcribe (Whisper/OpenAI). Ensure your OpenAI credential is set and your n8n instance can fetch the audio file from Telegram. 
Example prompt patterns (keep it short & generic) Thumbnail prompt: “Minimal, elegant, surreal [OBJECT], clean composition, 9:16” Video prompt: “Cinematic [OBJECT], slow camera move, elegant reflections, minimal & surreal mood, 9:16, 8–12s.” You can later replace the simple prompt builder with a dedicated LLM step or a fixed style guide for your brand. Final notes This template focuses on a solid, reliable pipeline first. You can always refine prompts later. Start with Veo‑3 Fast to keep iteration costs low; switch to Quality for final renders. Consider saving outputs (S3/Drive) and logging prompts/URLs to a sheet for audit & analytics. Questions or custom requests? 📩 | {
"nodes": [
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.telegram",
"type": "Telegram",
"category": [
"Communication",
"HITL"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.openAi",
"type": "OpenAI",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 6,
"node_types": [
"HTTP Request",
"Telegram",
"AI Agent",
"OpenAI Chat Model",
"Structured Output Parser",
"OpenAI"
]
} | |
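The "keep polling until the job is finished" pattern described above (an HTTP Request node looping through an IF/Wait pair) reduces to a small loop. The response shape ({ status, resultUrl }) and the terminal states are assumptions for illustration — check KIE.ai's docs for the actual job schema:

```javascript
// Generic job poller: calls checkStatus() until it reports a terminal state
// or maxAttempts is exhausted. checkStatus stands in for an HTTP Request
// node hitting a (hypothetical) job-status endpoint.
function pollUntilDone(checkStatus, maxAttempts = 10) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const { status, resultUrl } = checkStatus(attempt);
    if (status === "completed") return { attempts: attempt, resultUrl };
    if (status === "failed") throw new Error("Render job failed");
    // In n8n, a Wait node between attempts plays the role of a sleep here.
  }
  throw new Error("Job did not finish in time");
}

// Fake status endpoint: "processing" twice, then done with a video URL.
const fakeStatus = (attempt) =>
  attempt < 3
    ? { status: "processing" }
    : { status: "completed", resultUrl: "https://example.com/video.mp4" };

const outcome = pollUntilDone(fakeStatus);
```

Capping the attempts (and spacing them with a Wait node) is what keeps the loop from hammering the API when a render stalls.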
Create an n8n workflow for: Clone Viral TikToks with AI Avatars & Auto-Post to 9 Platforms using Perplexity & Blotato
Description: Clone a viral TikTok with AI and auto-post it to 9 platforms using Perplexity & Blotato Who is this for? This workflow is perfect for: Content creators looking to repurpose viral content Social media managers who want to scale short-form content across multiple platforms Entrepreneurs and marketers aiming to save time and boost visibility with AI-powered automation What problem is this workflow solving? Reproducing viral video formats with your own branding and pushing them to multiple platforms is time-consuming and hard to scale. This workflow solves that by: Cloning a viral TikTok video’s structure Generating a new version with your avatar Rewriting the script, caption, and overlay text Auto-posting it to 9 social media platforms — without manual uploads What this workflow does From a simple Telegram message with a TikTok link, the workflow: Downloads a TikTok video and extracts its thumbnail, audio, and caption Transcribes the audio and saves original text into Google Sheets Uses Perplexity AI to suggest a new content idea in the same niche Rewrites the script, caption, and overlay using GPT-4o Generates a new video with your avatar using Captions.ai Adds subtitles and overlay text with JSON2Video Saves metadata to Google Sheets for tracking Sends the final video to Telegram for preview Auto-publishes the video to Instagram, YouTube, TikTok, Facebook, LinkedIn, Threads, X (Twitter), Pinterest, and Bluesky via Blotato Setup Connect your Telegram bot to the trigger node. Add your OpenAI, Perplexity, Cloudinary, Captions.ai, and Blotato API keys. Make sure your Google Sheet is ready with the appropriate columns. Replace the default avatar name in the Captions.ai node with yours. Fill in your social media account IDs in the "Assign Platform IDs" node. Test by sending a TikTok URL to your Telegram bot. How to customize this workflow to your needs Change avatar output style: adjust resolution, voice, or avatar ID. 
Refine script structure: tweak GPT instructions for different tone/format. Swap Perplexity with ChatGPT or Claude if needed. Filter by platform: disable any Blotato nodes you don’t need. Add approval step: insert a Telegram confirmation node before publishing. Adjust subtitle style or overlay text font in JSON2Video. 📄 Documentation: Notion Guide Need help customizing? Contact me for consulting and support: LinkedIn / YouTube | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.telegram",
"type": "Telegram",
"category": [
"Communication",
"HITL"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.openAi",
"type": "OpenAI",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 5,
"node_types": [
"Google Sheets",
"HTTP Request",
"Telegram",
"Code",
"OpenAI"
]
} | |
Create an n8n workflow for: Customer Support WhatsApp Bot with Google Docs Knowledge Base and Gemini AI
Description: Document-Aware WhatsApp AI Bot for Customer Support Google Docs-Powered WhatsApp Support Agent 24/7 WhatsApp AI Assistant with Live Knowledge from Google Docs 📝Description Template Smart WhatsApp AI Assistant Using Google Docs Help customers instantly on WhatsApp using a smart AI assistant that reads your company’s internal knowledge from a Google Doc in real time. Built for clubs, restaurants, agencies, or any business where clients ask questions based on a policy, FAQ, or services document. ⚙️ How it works Users send free-form questions to your WhatsApp Business number (e.g. “What are the gym rules?” or “Are you open today?”) The bot automatically reads your company’s internal Google Doc (policy, schedule, etc.) It merges the document content with today’s date and the user’s question to craft a custom AI prompt The AI (Gemini or ChatGPT) then replies back on WhatsApp using natural, helpful language All conversations are logged to Google Sheets for reporting or audit > 💡Bonus: The AI even understands dates inside the document and compares them to today’s date — e.g. if your document says “Closed May 25 for 30 days,” it will say “We're currently closed until June 24.” 🧰 Set up steps Connect your WhatsApp Cloud API account (Meta) Add your Google account and grant access to the Doc containing your company info Choose your AI model (ChatGPT/OpenAI or Gemini) Paste your document ID into the Google Docs node Connect your WhatsApp webhook to Meta (only takes 5 minutes) Done — start receiving and answering customer questions! > 📄 Works best with free-tier OpenAI/Gemini, Google Docs, and Meta's Cloud API (no phone required). Everything is modular, extensible, and low-code. 
🔄 Customization Tips Change the Google Doc anytime to update answers — no retraining needed Add your logo and business name in the AI agent’s “System Prompt” Add fallback routes like “Escalate to human” if the bot can't help Clone for multiple brands by duplicating the workflow and swapping in new docs 🤝 Need Help Setting It Up? If you'd like help connecting your WhatsApp Business API, setting up Google Docs access, or customizing this AI assistant for your business or clients… 📩 I offer setup, branding, and customization services: WhatsApp Cloud API setup & verification Google OAuth & Doc structure guidance AI model configuration (OpenAI / Gemini) Branding & prompt tone customization Logging, reporting, and escalation logic Just send a message via: Email: WhatsApp: +20 106 180 3236 | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.googleDocs",
"type": "Google Docs",
"category": [
"Miscellaneous"
]
},
{
"name": "n8n-nodes-base.whatsApp",
"type": "WhatsApp Business Cloud",
"category": [
"Communication",
"HITL"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"type": "Simple Memory",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 7,
"node_types": [
"Google Sheets",
"Google Docs",
"WhatsApp Business Cloud",
"Code",
"AI Agent",
"Simple Memory",
"Google Gemini Chat Model"
]
} | |
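The date-awareness bonus above ("Closed May 25 for 30 days" → "closed until June 24") is, at its core, date arithmetic plus a comparison against today. In the template the AI handles this from the prompt; this standalone sketch just shows the underlying calculation:

```javascript
// Given a closure start date and a duration in days, compute the reopen
// date and decide whether the business is currently closed.
function closureInfo(startIso, days, todayIso) {
  const start = new Date(startIso + "T00:00:00Z");
  const reopen = new Date(start);
  reopen.setUTCDate(reopen.getUTCDate() + days); // rolls over month ends
  const today = new Date(todayIso + "T00:00:00Z");
  return {
    reopenDate: reopen.toISOString().slice(0, 10),
    currentlyClosed: today >= start && today < reopen,
  };
}

// "Closed May 25 for 30 days", asked on June 10.
const info = closureInfo("2025-05-25", 30, "2025-06-10");
```

Feeding today's date into the prompt (as the workflow does) is what lets the model, or a helper like this, resolve relative statements in the document into a concrete answer.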
Create an n8n workflow for: Nano Banana AI Product Image Creator via WhatsApp
Description: Nano Banana AI Product Image Creator via WhatsApp Transform ordinary product photos into premium marketing visuals instantly using Gemini AI for prompt enhancement and Nano Banana AI for image generation through WhatsApp. Who's it for Small business owners E-commerce sellers Social media managers Anyone selling products online What it does • Takes your normal product photo • Gets your text/caption as input • Gemini AI improves your basic prompt into professional instructions • Nano Banana AI generates new premium ad image using enhanced prompt • Keeps your original product exactly the same • Returns social media-ready marketing image Key Benefits Smart Prompt Enhancement - Gemini AI makes your simple text into professional prompts Superior Image Generation - Nano Banana AI creates premium visuals Original Product Protected - Your product stays 100% unchanged Fast Results - Get enhanced image in under 60 seconds Easy to Use - Just send photo via WhatsApp Social Media Ready - Perfect for Instagram, Facebook, ads How it works Send Photo + Caption → Upload product image with your text AI Prompt Magic → Gemini AI turns your caption into professional prompt Image Generation → Nano Banana AI creates new premium ad using improved prompt Receive Result → Get marketing-ready image back instantly The Magic Process Your Input: "Make this look premium for Instagram" Gemini AI Enhanced Prompt: "Create luxury product advertisement with professional studio lighting, marble background, elegant typography, commercial photography style, high-end visual appeal..." 
Nano Banana AI Generated Result: Professional marketing image created with your improved prompt What you get • Better Prompts - Gemini AI turns simple text into detailed instructions • Professional Images - Studio-quality marketing visuals generated by Nano Banana AI • Same Product - Your original item stays unchanged • Quick Results - Ready in under 60 seconds Perfect for Instagram posts Facebook ads Online store images Social media marketing Why This Combo Works Best Gemini AI - Expert at understanding and enhancing text prompts Nano Banana AI - Specialized for high-quality product image generation Best of Both - Combines prompt expertise with image generation power Faster Results - Optimized dual-AI workflow Perfect solution for creating professional product ads from simple WhatsApp messages using the power of both Gemini AI and Nano Banana AI. | {
"nodes": [
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.whatsApp",
"type": "WhatsApp Business Cloud",
"category": [
"Communication",
"HITL"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "@n8n/n8n-nodes-langchain.googleGemini",
"type": "Google Gemini",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 4,
"node_types": [
"HTTP Request",
"WhatsApp Business Cloud",
"Code",
"Google Gemini"
]
} | |
Create an n8n workflow for: Gmail AI Email Manager
Description: Want to check out all my flows? Follow me on: Email Manager - Intelligent Gmail Classification This automation flow is designed to automatically monitor incoming Gmail messages, analyze their content and context using AI, and intelligently classify them with appropriate labels for better email organization and prioritization. ⚙️ How It Works (Step-by-Step): 📧 Gmail Monitoring (Trigger) Continuously monitors your Gmail inbox: Polls for new emails every minute Captures all incoming messages automatically Triggers workflow for each new email received 📖 Email Content Extraction Retrieves complete email details: Full email body and headers Sender information and recipient lists Subject line and metadata Existing Gmail labels and categories Email threading information (replies/forwards) 🔍 Email History Analysis AI agent checks relationship context: Searches for previous emails from the same sender Checks sent folder for prior outbound correspondence Determines if this is a first-time contact (cold email) Analyzes conversation thread history 🤖 Intelligent Classification Agent Advanced AI categorization using: Claude Sonnet 4 for sophisticated email analysis Context-aware classification based on email history Content analysis for intent and urgency detection Header analysis for automated vs. 
human-sent emails 🏷️ Smart Label Assignment Automatically applies appropriate Gmail labels: To Respond: Requires direct action/reply FYI: For awareness, no action needed Notification: Service updates, policy changes Marketing: Promotional content and sales pitches Meeting Update: Calendar-related communications Comment: Document/task feedback 📋 Structured Processing Ensures consistent labeling: Uses structured output parsing for reliability Returns specific Label ID for Gmail integration Applies label automatically to the email Maintains classification accuracy 🛠️ Tools Used: n8n: Workflow automation platform Gmail API: Email monitoring and label management Anthropic Claude: Advanced email content analysis Gmail Tools: Email history checking and search Structured Output Parser: Consistent AI responses 📦 Key Features: Real-time email monitoring and classification Context-aware analysis using email history Intelligent cold vs. warm email detection Multiple classification categories for organization Automatic Gmail label application Header analysis for automated email detection Thread-aware conversation tracking 🚀 Ideal Use Cases: Busy executives managing high email volumes Sales professionals prioritizing prospect communications Support teams organizing customer inquiries Marketing teams filtering promotional content Anyone wanting automated email organization Teams needing consistent email prioritization | {
"nodes": [
{
"name": "n8n-nodes-base.gmail",
"type": "Gmail",
"category": [
"Communication",
"HITL"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatAnthropic",
"type": "Anthropic Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 4,
"node_types": [
"Gmail",
"AI Agent",
"Anthropic Chat Model",
"Structured Output Parser"
]
} | |
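The structured-processing step above ends with a specific Label ID the Gmail node can apply. Below is a minimal sketch of that category-to-ID resolution with a safe fallback for unexpected AI output; the label IDs here are hypothetical placeholders (real Gmail label IDs are opaque strings returned by the Gmail API):

```javascript
// Hypothetical mapping from the classifier's category to a Gmail label ID.
// In Gmail, user-created labels have opaque IDs (e.g. "Label_123456");
// these short placeholders are for illustration only.
const LABEL_IDS = {
  "To Respond": "Label_1",
  "FYI": "Label_2",
  "Notification": "Label_3",
  "Marketing": "Label_4",
  "Meeting Update": "Label_5",
  "Comment": "Label_6",
};

// Validate the parsed AI output and resolve it to a label ID.
// Unknown categories fall back to "FYI" rather than failing the run.
function resolveLabel(parsed) {
  const category = LABEL_IDS[parsed?.category] ? parsed.category : "FYI";
  return { category, labelId: LABEL_IDS[category] };
}
```

The fallback mirrors what the Structured Output Parser is for: even when the model drifts, the workflow still hands the Gmail node a valid label ID.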
Create an n8n workflow for: Generate Personalized Sales Emails with LinkedIn Data & Claude 3.7 via OpenRouter
Description: How it works The automation loads rows from a Google Sheet of leads that you want to contact. It makes a Google search via Apify for LinkedIn links based on the First name / Last name / Company. Another Apify actor fetches the right LinkedIn profile based on the first profile which is returned. The same process is done for the company that the lead works for, giving extra context. If the lead has a current company listed on their LinkedIn, we use that URL to do the lookup, rather than doing a separate Google search. A call is made to OpenRouter to get an LLM to generate an email based on a prompt designed to do personalized outreach. An email is sent via a Gmail node. Set up steps Connect your Google Sheets + Gmail accounts to use these APIs. Make an account with Apify and enter your credentials. Set your details in the "Set My Data" node to customize the workflow to revolve around your company + value proposition. I would recommend changing the prompt in the "Generate Personalized Email" node to match the tone of voice that you want your agent to have. You can change the guidelines to e.g. change whether the agent introduces itself, and give more examples in the style you want to make the output better. | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.gmail",
"type": "Gmail",
"category": [
"Communication",
"HITL"
]
},
{
"name": "@n8n/n8n-nodes-langchain.chainLlm",
"type": "Basic LLM Chain",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenRouter",
"type": "OpenRouter Chat Model",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 6,
"node_types": [
"Google Sheets",
"HTTP Request",
"Gmail",
"Basic LLM Chain",
"Structured Output Parser",
"OpenRouter Chat Model"
]
} | |
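The Google-search lookup described above (find a LinkedIn profile from First name / Last name / Company) is commonly done with a site:-restricted query. The exact query format the Apify actor expects is not shown in the template, so this builder is an assumption for illustration:

```javascript
// Build a Google search query that restricts results to LinkedIn profile
// pages. The site: filter and quoted phrases are standard Google operators;
// the workflow then takes the first result returned.
function linkedinQuery(lead) {
  return `site:linkedin.com/in "${lead.firstName} ${lead.lastName}" "${lead.company}"`;
}

const query = linkedinQuery({
  firstName: "Grace",
  lastName: "Hopper",
  company: "US Navy",
});
```

Quoting the full name and company keeps the search precise, which matters because the workflow trusts the first profile returned.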
Create an n8n workflow for: Automated Competitor Pricing Monitor with Bright Data MCP & OpenAI
Description: This workflow contains community nodes that are only compatible with the self-hosted version of n8n. This workflow automatically monitors competitor pricing changes and website updates to keep you informed of market movements. It saves you time by eliminating the need to manually check competitor websites and provides alerts only when actual changes occur, preventing information overload. Overview This workflow automatically scrapes competitor pricing pages (like ClickUp) and compares current pricing with previously stored data. It uses Bright Data to access competitor websites without being blocked and AI to intelligently extract pricing information, updating your tracking spreadsheet only when changes are detected. Tools Used n8n: The automation platform that orchestrates the workflow Bright Data: For scraping competitor websites without being blocked OpenAI: AI agent for intelligent pricing data extraction and parsing Google Sheets: For storing and comparing historical pricing data How to Install Import the Workflow: Download the .json file and import it into your n8n instance Configure Bright Data: Add your Bright Data credentials to the MCP Client node Set Up OpenAI: Configure your OpenAI API credentials Configure Google Sheets: Connect your Google Sheets account and set up your pricing tracking spreadsheet Customize: Set your competitor URLs and pricing monitoring schedule Use Cases Product Teams: Monitor competitor feature and pricing changes for strategic planning Sales Teams: Stay informed of competitor pricing to adjust sales strategies Marketing Teams: Track competitor messaging and positioning changes Business Intelligence: Build comprehensive competitor analysis databases Connect with Me Website: YouTube: LinkedIn: Get Bright Data: (Using this link supports my free workflows with a small commission) #n8n #automation #competitoranalysis #pricingmonitoring #brightdata #webscraping #competitortracking 
#marketintelligence #n8nworkflow #workflow #nocode #pricetracking #businessintelligence #competitiveanalysis #marketresearch #competitormonitoring #pricingdata #websitemonitoring #competitorpricing #marketanalysis #competitorwatch #pricingalerts #businessautomation #competitorinsights #markettrends #pricingchanges #competitorupdates #strategicanalysis #marketposition #competitiveintelligence | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "@n8n/n8n-nodes-langchain.agent",
"type": "AI Agent",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserAutofixing",
"type": "Auto-fixing Output Parser",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.outputParserStructured",
"type": "Structured Output Parser",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 5,
"node_types": [
"Google Sheets",
"AI Agent",
"OpenAI Chat Model",
"Auto-fixing Output Parser",
"Structured Output Parser"
]
} | |
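The change-detection step above — update the sheet and alert only when prices actually differ from the stored row — can be sketched as a field-by-field diff. The plan names and the flat { plan: price } shape are assumptions for illustration:

```javascript
// Compare freshly scraped prices with the previously stored row and
// report only the plans whose price actually changed.
function diffPricing(stored, scraped) {
  const changes = [];
  for (const [plan, newPrice] of Object.entries(scraped)) {
    const oldPrice = stored[plan];
    if (oldPrice !== newPrice) {
      changes.push({ plan, oldPrice: oldPrice ?? null, newPrice });
    }
  }
  return { changed: changes.length > 0, changes };
}

// Previously stored ClickUp-style prices vs. today's scrape.
const stored = { Free: 0, Unlimited: 7, Business: 12 };
const scraped = { Free: 0, Unlimited: 7, Business: 19 };
const result = diffPricing(stored, scraped);
```

Gating the Google Sheets update and the alert on `result.changed` is what prevents the information overload the description mentions.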
Create an n8n workflow for: Lead Generation Agent
Description: Who this is for This workflow is for digital marketing agencies or sales teams who want to automatically find business leads based on industry & location, gather their contact details, and send personalized cold emails — all from one form submission. What this workflow does This workflow starts every time someone submits the Lead Machine Form. It then: Scrapes business data (company name, website, phone, address, category) using Apify based on business type & location. Extracts the best email address from each business website using Google Gemini AI. Stores valid leads in Google Sheets. Generates cold email content (subject + body) with AI based on your preferred tone (Friendly, Professional, Simple). Sends the cold email via Gmail. Updates the sheet with send status & timestamp. Setup To set this workflow up: Form Trigger – Customize the “Lead Machine” form fields if needed (Business Type, Location, Lead Number, Email Style). Apify API – Add your Apify Actor Endpoint URL in the HTTP Request node. Google Gemini – Add credentials for extracting email addresses. Google Sheets – Connect your sheet for storing leads & email status. OpenAI – Add your credentials for cold email generation. Gmail – Connect your Gmail account for sending cold emails. How to customize this workflow to your needs Change the AI email prompt to reflect your brand’s voice and offer. Add filters to only target leads that meet specific criteria (e.g., website must exist, email must be verified). Modify the Google Sheets structure to track extra info like “Follow-up Date” or “Lead Source”. Switch Gmail to another email provider if preferred. | {
"nodes": [
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.gmail",
"type": "Gmail",
"category": [
"Communication",
"HITL"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"type": "OpenAI Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.informationExtractor",
"type": "Information Extractor",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 6,
"node_types": [
"Google Sheets",
"HTTP Request",
"Gmail",
"OpenAI Chat Model",
"Google Gemini Chat Model",
"Information Extractor"
]
} | |
Create an n8n workflow for: Extract & Transform HackerNews Data to Google Docs using Gemini 2.0 Flash
Description: This workflow automates the process of scraping the latest discussions from HackerNews, transforming raw threads into human-readable content using Google Gemini, and exporting the final content into a well-formatted Google Doc. Overview This n8n workflow is responsible for extracting trending posts from the HackerNews API. It loops through each item, performs HTTP data extraction, utilizes Google Gemini to generate human-readable insights, and then exports the enriched content into Google Docs for distribution, archiving, or content creation. Who this workflow is for Tech Newsletter Writers: Automate the collection and summarization of trending HackerNews posts for inclusion in weekly or daily newsletters. Content Creators & Bloggers: Quickly generate structured summaries and insights from HackerNews threads to use as inspiration or supporting content for blog posts, videos, or social media. Startup Founders & Product Builders: Monitor HackerNews for discussions relevant to your niche or competitors, and keep a pulse on the community’s opinions. Investors & Analysts: Surface early signals from the tech ecosystem by identifying what’s trending and how the community is reacting. Researchers & Students: Analyze popular discussions and emerging trends in technology, programming, and startups—enriched with AI-generated insights. Digital Agencies & Consultants: Offer HackerNews monitoring and insight reports as a value-added service to clients interested in the tech space. Tools Used n8n: The core automation engine that manages the trigger, transformation, and export. HackerNews API: Provides access to trending or new HN posts. Google Gemini: Enriches HackerNews content with structured insights and human-like summaries. Google Docs: Automatically creates and updates a document with the enriched content, ready for sharing or publishing. 
How to Install Import the Workflow: Download the .json file and import it into your n8n instance. Set Up HackerNews Source: Choose whether to use the HN API (via HTTP Request node) or RSS Feed node. Configure Gemini API: Add your Google Gemini API key and design the prompt to extract pros/cons, key themes, or insights. Set Up Google Docs Integration: Connect your Google account and configure the Google Docs node to create/update a document. Test and Deploy: Run a test job to ensure data flows correctly and outputs are formatted as expected. Use Cases Tech Newsletter Authors: Generate ready-to-use summaries of trending HackerNews threads. Startup Founders: Stay informed on key discussions, product launches, and community feedback. Investors & Analysts: Spot early trends, technical insights, and startup momentum directly from HN. Researchers: Track community reactions to new technologies or frameworks. Content Creators: Use the enriched data to spark blog posts, YouTube scripts, or LinkedIn updates. Connect with Me Email: LinkedIn: Get Bright Data: Bright Data (Supports free workflows with a small commission) #n8n #automation #hackernews #contentcuration #aiwriting #geminiapi #googlegemini #techtrends #newsletterautomation #googleworkspace #rssautomation #nocode #structureddata #webscraping #contentautomation #hninsights #aiworkflow #googleintegration #webmonitoring #hnnews #aiassistant #gdocs #automationtools #gptlike #geminiwriter | {
"nodes": [
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.hackerNews",
"type": "Hacker News",
"category": [
"Communication",
"Marketing"
]
},
{
"name": "n8n-nodes-base.googleDocs",
"type": "Google Docs",
"category": [
"Miscellaneous"
]
},
{
"name": "@n8n/n8n-nodes-langchain.chainLlm",
"type": "Basic LLM Chain",
"category": [
"AI",
"Langchain"
]
},
{
"name": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"type": "Google Gemini Chat Model",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 5,
"node_types": [
"HTTP Request",
"Hacker News",
"Google Docs",
"Basic LLM Chain",
"Google Gemini Chat Model"
]
} | |
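The loop-and-enrich step the first workflow describes (loop through each HackerNews item, build human-readable content for Gemini) can be sketched outside n8n. A minimal Python sketch of the prompt-building stage, assuming items shaped like HackerNews API records (`title`, `score`, `descendants`, `url` are standard fields there; the prompt wording itself is hypothetical):

```python
def build_digest_prompt(items):
    """Turn a batch of HackerNews items into a single Gemini prompt,
    mirroring the workflow's loop-through-each-item step."""
    lines = ["Rewrite these HackerNews discussions as human-readable insights:", ""]
    for item in items:
        # Each bullet carries the title plus engagement signals.
        lines.append(
            f"- {item['title']} "
            f"({item.get('score', 0)} points, {item.get('descendants', 0)} comments)"
        )
        if item.get("url"):
            lines.append(f"  {item['url']}")
    return "\n".join(lines)

sample = [
    {"title": "Show HN: My side project", "score": 312, "descendants": 98,
     "url": "https://example.com"},
    {"title": "Ask HN: Favorite debugger?", "score": 120, "descendants": 240},
]
prompt = build_digest_prompt(sample)
print(prompt)
```

In the workflow itself this string would become the Basic LLM Chain's input, with the Google Gemini Chat Model attached.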
Create an n8n workflow for: Process Large Documents with OCR using SubworkflowAI and Gemini
Description: Working with Large Documents in Your VLM OCR Workflow

Document workflows are a popular way to use AI, but what happens when your document is too large for your app or your AI to handle? Whether it's the context window or application memory that's grinding to a halt, Subworkflow.ai is one approach to keep you going.

> Subworkflow.ai is a third-party API service that helps AI developers work with documents too large for context windows and runtime memory.

Prerequisites: You'll need a Subworkflow.ai API key to use the Subworkflow.ai service. Add the API key as a header auth credential. More details in the official docs.

How it Works:
1. Import your document into your n8n workflow.
2. Upload it to the Subworkflow.ai service via the Extract API using the HTTP node. This endpoint takes files up to 100MB. Once uploaded, this triggers an Extract job on the service's side, and the response is a "job" record to track progress.
3. Poll Subworkflow.ai's Jobs endpoint and keep polling until the job is finished. You can use the "IF" node looping back onto itself to achieve this in n8n.
4. Once the job is done, the Dataset of the uploaded document is ready for retrieval. Use the Datasets and DatasetItems API to retrieve whatever you need to complete your AI task. In this example, all pages are retrieved and run through a multimodal LLM to parse into markdown, a common approach when parsing data tables or graphics is required.

How to use: Integrate Subworkflow's Extract API seamlessly into your existing document workflows to support larger documents, from 100MB+ up to 5,000 pages.

Customising the workflow: Sometimes you don't want the entire document back, especially if the document is quite large (think 500+ pages!). Instead, use query parameters on the DatasetItems API to pick individual pages or a range of pages to reduce the load.

Need Help? See the official API documentation or join the Discord. | {
"nodes": [
{
"name": "n8n-nodes-base.httpRequest",
"type": "HTTP Request",
"category": [
"Development",
"Core Nodes"
]
},
{
"name": "n8n-nodes-base.googleDrive",
"type": "Google Drive",
"category": [
"Data & Storage"
]
},
{
"name": "@n8n/n8n-nodes-langchain.googleGemini",
"type": "Google Gemini",
"category": [
"AI",
"Langchain"
]
}
],
"node_count": 3,
"node_types": [
"HTTP Request",
"Google Drive",
"Google Gemini"
]
} | |
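The poll-until-finished step above ("IF" node looping back onto itself) has a direct scripted equivalent. A sketch, with the caveat that the status values (`"finished"`, `"failed"`) and the job-record shape are illustrative assumptions, not Subworkflow.ai's documented responses; a fake status source stands in for the HTTP call to the Jobs endpoint:

```python
import time

def poll_until_done(get_status, job_id, interval=2.0, max_attempts=30):
    """Re-check the job until it finishes -- the same logic the
    IF node looping back onto itself implements inside n8n."""
    for attempt in range(max_attempts):
        status = get_status(job_id)
        if status == "finished":
            return attempt + 1  # number of polls it took
        if status == "failed":
            raise RuntimeError(f"extract job {job_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still running after {max_attempts} polls")

# Fake status source standing in for the Jobs endpoint: two
# "processing" responses, then "finished".
responses = iter(["processing", "processing", "finished"])
polls = poll_until_done(lambda _id: next(responses), "job-123", interval=0)
print(polls)  # -> 3
```

In production you would keep a non-zero `interval` and a `max_attempts` sized to your largest documents, so a stuck job fails loudly instead of looping forever.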
Create an n8n workflow for: PPC Campaign Intelligence & Optimization with Google Ads, Sheets & Slack
Description: How it Works

This workflow automatically monitors your Google Ads campaigns every day, analyzing performance with AI-powered scoring to identify scaling opportunities and catch issues before they drain your budget. Each morning at 9 AM, it fetches all active campaign data, including clicks, impressions, conversions, costs, and conversion rates, from your Google Ads account. The AI analysis engine evaluates four critical dimensions: CTR (click-through rate) to measure ad relevance, conversion rate to assess landing page effectiveness, cost per conversion to evaluate profitability, and traffic volume to identify scale-readiness. Each campaign receives a performance score (0-100 points) and is automatically categorized as Excellent (75+), Good (55-74), Fair (35-54), or Underperforming (0-34). High-performing campaigns trigger instant Slack alerts to your PPC team with detailed scaling recommendations and projected ROI improvements, while underperforming campaigns generate urgent alerts with specific optimization actions. Every campaign is logged to your Google Sheets dashboard with daily metrics, and the system generates personalized email reports: action-oriented scaling plans for top performers and troubleshooting guides for campaigns needing attention. The entire analysis takes minutes, providing your team with daily intelligence reports that would otherwise require hours of manual spreadsheet work and data analysis.

Who is this for?
- PPC managers and paid media specialists drowning in campaign data and manual reporting
- Marketing agencies managing multiple client accounts needing automated performance monitoring
- E-commerce brands running high-spend campaigns who can't afford budget waste
- Growth teams looking to scale winners faster and pause losers immediately
- Anyone spending $5K+ monthly on Google Ads who needs data-driven optimization decisions

Setup Steps:
- **Setup time**: Approx. 15-25 minutes (credential configuration, dashboard setup, alert customization)
- **Requirements**: Google Ads account with active campaigns; Google account with a tracking spreadsheet; Slack workspace; SMTP email provider (Gmail, SendGrid, etc.)
- Create a Google Sheets dashboard with two tabs, "Daily Performance" and "Campaign Log", with appropriate column headers.
- Set up these nodes:
  - Schedule Daily Check: Pre-configured to run at 9 AM daily (adjust timing if needed).
  - Fetch Google Ads Data: Connect your Google Ads account and authorize API access.
  - AI Performance Analysis: Review scoring thresholds (CTR, conversion rate, cost benchmarks).
  - Route by Performance: Automatically splits campaigns into high performers vs. issues.
  - Update Campaign Dashboard: Connect Google Sheets and select your "Daily Performance" tab.
  - Log All Campaigns: Select your "Campaign Log" tab for historical tracking.
  - Slack Alerts: Connect your workspace and configure separate channels for scaling opportunities and performance issues.
  - Generate Action Plan: Customize email templates with your brand voice and action items.
  - Email Performance Report: Configure SMTP and set recipient email addresses.
- Credentials must be entered into their respective nodes for successful execution.

Customization Guidance:
- **Scoring Weights**: Adjust point values for CTR (30), conversion rate (35), cost efficiency (25), and volume (10) in the AI Performance Analysis node based on your business priorities.
- **Performance Thresholds**: Modify the 75-point Excellent threshold and 55-point Good threshold to match your campaign quality distribution and industry benchmarks.
- **Benchmark Values**: Update CTR benchmarks (5% excellent, 3% good, 1.5% average) and conversion rate targets (10%, 5%, 2%) for your industry.
- **Alert Channels**: Create separate Slack channels for different alert types, or route critical alerts to Microsoft Teams, Discord, or SMS via Twilio.
- **Email Recipients**: Configure different recipient lists for scaling alerts (executives, growth team) vs. optimization alerts (campaign managers).
- **Schedule Frequency**: Change from daily to hourly monitoring for high-spend campaigns, or weekly for smaller accounts.
- **Additional Platforms**: Duplicate the workflow structure for Facebook Ads, Microsoft Ads, or LinkedIn Ads with platform-specific nodes.
- **Budget Controls**: Add nodes to automatically pause campaigns exceeding cost thresholds or adjust bids based on performance scores.

Once configured, this workflow will continuously monitor your ad spend, identify opportunities worth thousands in additional revenue, and alert you to issues before they waste your budget, transforming manual reporting into automated intelligence.

Built by Daniel Shashko. Connect on LinkedIn. | {
"nodes": [
{
"name": "n8n-nodes-base.emailSend",
"type": "Send Email",
"category": [
"Communication",
"Core Nodes",
"HITL"
]
},
{
"name": "n8n-nodes-base.googleSheets",
"type": "Google Sheets",
"category": [
"Data & Storage",
"Productivity"
]
},
{
"name": "n8n-nodes-base.slack",
"type": "Slack",
"category": [
"Communication",
"HITL"
]
},
{
"name": "n8n-nodes-base.googleAds",
"type": "Google Ads",
"category": [
"Analytics"
]
},
{
"name": "n8n-nodes-base.code",
"type": "Code",
"category": [
"Development",
"Core Nodes"
]
}
],
"node_count": 5,
"node_types": [
"Send Email",
"Google Sheets",
"Slack",
"Google Ads",
"Code"
]
} |
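The scoring logic in that workflow's Code node can be sketched from the description's own numbers: maximum weights of 30 (CTR), 35 (conversion rate), 25 (cost efficiency), and 10 (volume), with tier cutoffs at 75/55/35 and the stated CTR and conversion-rate benchmarks. The intermediate point values and the dollar/click cutoffs below are hypothetical fill-ins, not the workflow's actual ones:

```python
def score_campaign(ctr, conv_rate, cost_per_conv, clicks):
    """Score one campaign 0-100 using the weights from the description:
    CTR 30, conversion rate 35, cost efficiency 25, volume 10."""
    score = 0
    # CTR (max 30): benchmarks 5% excellent / 3% good / 1.5% average
    if ctr >= 5.0:   score += 30
    elif ctr >= 3.0: score += 20
    elif ctr >= 1.5: score += 10
    # Conversion rate (max 35): targets 10% / 5% / 2%
    if conv_rate >= 10.0:  score += 35
    elif conv_rate >= 5.0: score += 22
    elif conv_rate >= 2.0: score += 10
    # Cost per conversion (max 25): dollar cutoffs are hypothetical
    if cost_per_conv <= 20:    score += 25
    elif cost_per_conv <= 50:  score += 15
    elif cost_per_conv <= 100: score += 5
    # Traffic volume (max 10): click cutoffs are hypothetical
    if clicks >= 1000:  score += 10
    elif clicks >= 100: score += 5
    # Tier thresholds from the description: 75+ / 55-74 / 35-54 / below 35
    if score >= 75:   tier = "Excellent"
    elif score >= 55: tier = "Good"
    elif score >= 35: tier = "Fair"
    else:             tier = "Underperforming"
    return score, tier

print(score_campaign(ctr=5.2, conv_rate=11.0, cost_per_conv=18.0, clicks=1500))
# -> (100, 'Excellent')
```

Downstream, the Route by Performance step is then a simple branch on the tier: "Excellent" and "Good" go to the scaling-alert Slack channel, the rest to the optimization-alert path.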
This dataset contains 4,000+ training examples extracted from 6,837 publicly available n8n workflows from the n8n marketplace. The data is designed for fine-tuning Large Language Models to generate n8n workflow configurations from natural language descriptions.
Dataset Contents
Three format variations:
training_data_alpaca.json - Alpaca format for Llama/Mistral models
- Format: instruction-input-output triplets
- Use case: Fine-tuning with Unsloth, Axolotl, or similar frameworks
training_data_openai.jsonl - OpenAI format for GPT models
- Format: messages array with system/user/assistant roles
- Use case: OpenAI fine-tuning API
training_data_simple.json - Simplified format
- Format: Basic instruction-output pairs
- Use case: Custom training pipelines or quick prototyping
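The three files carry the same examples in different shapes, so converting between them is mechanical. A sketch of Alpaca-to-OpenAI conversion, assuming a record like the sample below; the system prompt here is a generic placeholder, not necessarily the one used in training_data_openai.jsonl:

```python
import json

def alpaca_to_openai(example, system="You are an n8n workflow generator."):
    """Convert one Alpaca-style record (instruction/input/output)
    into the OpenAI messages format (system/user/assistant roles)."""
    user = example["instruction"]
    if example.get("input"):  # optional extra context goes into the user turn
        user += "\n\n" + example["input"]
    out = example["output"]
    if not isinstance(out, str):  # outputs may be JSON objects; serialize them
        out = json.dumps(out)
    return {"messages": [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
        {"role": "assistant", "content": out},
    ]}

record = {"instruction": "Create an n8n workflow for: AI Email Assistant",
          "input": "", "output": {"node_count": 3}}
converted = alpaca_to_openai(record)
print(converted["messages"][1]["content"])
```

Writing one converted record per line then yields a `.jsonl` file in the shape the OpenAI fine-tuning API expects.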
Data Statistics
- Total examples: 4,000+
- Source workflows: 6,837 from n8n marketplace
- Coverage: Diverse workflow types (AI, automation, integrations, data processing)
- Quality: Cleaned, validated, and structured
Sample Format (Alpaca)
{
"instruction": "Create an n8n workflow for: AI Email Assistant",
"input": "",
"output": {
"name": "AI Email Assistant",
"nodes": [
{"type": "Gmail Trigger"},
{"type": "OpenAI Chat Model"},
{"type": "Gmail"}
],
"node_count": 3,
"categories": ["AI", "Communication"]
}
}
Use Case
This dataset was used to successfully fine-tune Llama 3 8B for n8n workflow generation. The resulting model can generate valid workflow configurations from natural language descriptions.
Training Results
- Model: Llama 3 8B (4-bit quantized)
- Training time: 55 minutes on A100 GPU
- Final loss: 1.235900
- Inference quality: Production-ready
Data Collection Methodology
- Scraped n8n marketplace via public API
- Extracted workflow metadata and node structures
- Generated instruction-output pairs
- Validated JSON structure and data quality
- Formatted for multiple training frameworks
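The "validated JSON structure" step can be as simple as cross-checking each record's fields for internal consistency. A sketch using the field names visible in the preview rows (`nodes`, `node_count`, `node_types`); the actual validation pipeline may check more than this:

```python
import json

REQUIRED_KEYS = {"nodes", "node_count", "node_types"}

def validate_output(output):
    """Return True if one example's output is internally consistent:
    required keys present, node_count matches, node_types aligned."""
    if isinstance(output, str):  # outputs may arrive JSON-encoded
        try:
            output = json.loads(output)
        except json.JSONDecodeError:
            return False
    if not REQUIRED_KEYS <= output.keys():
        return False
    if output["node_count"] != len(output["nodes"]):
        return False
    # node_types should mirror the "type" of each node, in order
    return [n.get("type") for n in output["nodes"]] == output["node_types"]

good = {
    "nodes": [{"name": "n8n-nodes-base.httpRequest", "type": "HTTP Request"},
              {"name": "n8n-nodes-base.googleDrive", "type": "Google Drive"}],
    "node_count": 2,
    "node_types": ["HTTP Request", "Google Drive"],
}
print(validate_output(good))                       # -> True
print(validate_output({**good, "node_count": 5}))  # -> False
```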
License & Attribution
- Source: n8n marketplace (public workflows)
- Created by: Mustapha Liaichi
- Project: n8n Workflow Generator
- Website: n8nlearninghub.com
- GitHub: MuLIAICHI
Citation
If you use this dataset, please cite:
@dataset{liaichi2024n8nworkflows,
author = {Mustapha Liaichi},
title = {n8n Workflow Training Dataset},
year = {2024},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/MustaphaL/n8n-workflow-training-data}
}
Related Resources
- Model: MustaphaL/n8n-workflow-generator
- Analysis: "What Are People Actually Building in n8n?" (Medium)
- Tool: n8n Marketplace Analyzer (Apify)
- Community: r/n8nLearningHub
Future Updates
This dataset may be updated periodically with:
- Additional workflows from marketplace
- Enhanced metadata and categorization
- Multi-language workflow descriptions
- Advanced workflow patterns
- For questions or collaboration: mustaphaliaichi@gmail.com
- For fresh training data, check the Actor on Apify: Get fresh data
- Read the full story: read me
License: Apache-2.0