
n8n Workflow Training Datasets (3 Formats)

This dataset contains 4,000+ training examples extracted from 6,837 publicly available n8n workflows on the n8n marketplace. It is designed for fine-tuning large language models to generate n8n workflow configurations from natural-language descriptions.

Dataset Contents

Three format variations:

  1. training_data_alpaca.json - Alpaca format for Llama/Mistral models

    • Format: instruction-input-output triplets
    • Use case: Fine-tuning with Unsloth, Axolotl, or similar frameworks
  2. training_data_openai.jsonl - OpenAI format for GPT models

    • Format: messages array with system/user/assistant roles
    • Use case: OpenAI fine-tuning API
  3. training_data_simple.json - Simplified format

    • Format: Basic instruction-output pairs
    • Use case: Custom training pipelines or quick prototyping
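All three formats carry the same underlying instruction-output pairs, only the packaging differs. A minimal sketch of converting one Alpaca-style record into the OpenAI messages layout (the field names follow the formats described above; the system prompt is a placeholder assumption, not the one used to build training_data_openai.jsonl):

```python
import json

def alpaca_to_openai(example, system_prompt="You are an n8n workflow generator."):
    """Convert an Alpaca-style record to an OpenAI chat-format record.

    The system prompt is an assumed placeholder for illustration.
    """
    user_content = example["instruction"]
    if example.get("input"):  # Alpaca's optional input field is appended to the user turn
        user_content += "\n\n" + example["input"]
    output = example["output"]
    if not isinstance(output, str):  # workflow JSON objects are serialized to a string
        output = json.dumps(output)
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
            {"role": "assistant", "content": output},
        ]
    }

record = {
    "instruction": "Create an n8n workflow for: AI Email Assistant",
    "input": "",
    "output": {"name": "AI Email Assistant", "nodes": [{"type": "Gmail Trigger"}]},
}
chat = alpaca_to_openai(record)
```

Each converted record is one line of a .jsonl file, which is the shape the OpenAI fine-tuning API expects.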

Data Statistics

  • Total examples: 4,000+
  • Source workflows: 6,837 from n8n marketplace
  • Coverage: Diverse workflow types (AI, automation, integrations, data processing)
  • Quality: Cleaned, validated, and structured

Sample Format (Alpaca)

{
  "instruction": "Create an n8n workflow for: AI Email Assistant",
  "input": "",
  "output": {
    "name": "AI Email Assistant",
    "nodes": [
      {"type": "Gmail Trigger"},
      {"type": "OpenAI Chat Model"},
      {"type": "Gmail"}
    ],
    "node_count": 3,
    "categories": ["AI", "Communication"]
  }
}
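A quick sanity check on records in this shape, a sketch using only the fields shown in the sample above:

```python
import json

sample = """
{
  "instruction": "Create an n8n workflow for: AI Email Assistant",
  "input": "",
  "output": {
    "name": "AI Email Assistant",
    "nodes": [
      {"type": "Gmail Trigger"},
      {"type": "OpenAI Chat Model"},
      {"type": "Gmail"}
    ],
    "node_count": 3,
    "categories": ["AI", "Communication"]
  }
}
"""

record = json.loads(sample)
workflow = record["output"]
# node_count should agree with the length of the nodes array
assert workflow["node_count"] == len(workflow["nodes"])
# every node entry carries a type
assert all("type" in n for n in workflow["nodes"])
```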

Use Case

This dataset was used to fine-tune Llama 3 8B for n8n workflow generation; the resulting model generates valid workflow configurations from natural-language descriptions.

Training Results

  • Model: Llama 3 8B (4-bit quantized)
  • Training time: 55 minutes on A100 GPU
  • Final loss: 1.235900
  • Inference quality: Production-ready

Data Collection Methodology

  1. Scraped n8n marketplace via public API
  2. Extracted workflow metadata and node structures
  3. Generated instruction-output pairs
  4. Validated JSON structure and data quality
  5. Formatted for multiple training frameworks
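Step 3 of the pipeline above can be sketched as turning scraped workflow metadata into an Alpaca-style pair (the metadata field names "name", "nodes", and "categories" are assumptions for illustration; the marketplace API may use different keys):

```python
import json

def to_training_pair(workflow_meta):
    """Build an instruction-output pair from scraped workflow metadata.

    The workflow_meta keys used here are assumed for illustration.
    """
    nodes = [{"type": n} for n in workflow_meta["nodes"]]
    return {
        "instruction": f"Create an n8n workflow for: {workflow_meta['name']}",
        "input": "",
        "output": {
            "name": workflow_meta["name"],
            "nodes": nodes,
            "node_count": len(nodes),
            "categories": workflow_meta.get("categories", []),
        },
    }

meta = {
    "name": "AI Email Assistant",
    "nodes": ["Gmail Trigger", "OpenAI Chat Model", "Gmail"],
    "categories": ["AI", "Communication"],
}
pair = to_training_pair(meta)
json.dumps(pair)  # step 4 in miniature: confirm the record serializes as valid JSON
```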

License & Attribution

  • License: Apache 2.0
  • Source: n8n marketplace (public workflows)
  • Created by: Mustapha Liaichi
  • Project: n8n Workflow Generator
  • Website: n8nlearninghub.com
  • GitHub: MuLIAICHI

Citation

If you use this dataset, please cite:

@dataset{liaichi2024n8nworkflows,
  author = {Mustapha Liaichi},
  title = {n8n Workflow Training Dataset},
  year = {2024},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/MustaphaL/n8n-workflow-training-data}
}

Related Resources

  • Model: MustaphaL/n8n-workflow-generator
  • Analysis: "What Are People Actually Building in n8n?" (Medium)
  • Tool: n8n Marketplace Analyzer (Apify)
  • Community: r/n8nLearningHub

Future Updates

This dataset may be updated periodically with:

  • Additional workflows from marketplace
  • Enhanced metadata and categorization
  • Multi-language workflow descriptions
  • Advanced workflow patterns
