I Broke My Own Workflow Engine at Scale — Here's How I Fixed It

Source: DEV Community
In my last post, I broke down how I built FlowForge, a fault-tolerant DAG workflow engine using ASP.NET Core, React, and MySQL. I explained how I solved complex branching and dependency execution using Kahn's Algorithm and a database-backed state machine. It works perfectly for hundreds of concurrent users. But as engineers, we always have to ask the dangerous question: what happens when we 1000x the load?

Let's deep-dive into the absolute limits of my current architecture, watch it break, and re-architect it to handle massive scale: 1,000,000 flow executions per second.

How the V1 Engine Works (And Why It Will Fail)

To understand how to scale the engine, you need to understand what it's currently doing.

Storage & Parsing: When a user builds a flow in the React frontend, we save the entire raw JSON (including UI viewports and node coordinates) into the definitionJson column of our Flow table. When execution starts, we parse this JSON into a backend-friendly ParsedFlow (a strict l