I Built a CLI AI Coding Assistant from Scratch — Here's What I Learned

Source: DEV Community
**TL;DR:** I spent several months studying Claude Code's architecture, then built Seed AI, a TypeScript CLI assistant with 14 original improvements. This post is a brain dump of the most interesting technical problems I solved.

## Why bother?

Claude Code is excellent. But it has a few hard constraints:

- Locked to Anthropic's API (no DeepSeek, no local Ollama)
- No memory between sessions
- No tool result caching (it reads the same file three times per session)
- No Docker sandbox

None of these are complaints; they're product decisions. But they left room to explore.

## The interesting problems

### 1. Parallel tool execution without breaking UX

Claude Code executes tools serially: permission(A) → exec(A) → permission(B) → exec(B). When the LLM requests three file reads simultaneously, that's 3× the latency. The naive fix is full parallelism, but then three permission dialogs fire at once, which is confusing.

The right split:

// Permissio
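The split described above can be sketched roughly as follows. This is my own illustrative reconstruction, not Seed AI's actual code: `ToolCall`, `askPermission`, and `runTool` are hypothetical names. The idea is that permission prompts stay serial (one dialog at a time), while the approved tool executions run in parallel.

```typescript
// Hypothetical sketch of "serial permissions, parallel execution".
// All names here are illustrative, not Seed AI's real API.

interface ToolCall {
  name: string;
  input: Record<string, unknown>;
}

// Phase 1 helper: in a real CLI this would render one interactive
// prompt at a time; stubbed to auto-approve for the sketch.
async function askPermission(call: ToolCall): Promise<boolean> {
  return true;
}

// Phase 2 helper: stand-in for actually invoking the tool.
async function runTool(call: ToolCall): Promise<string> {
  return `result of ${call.name}`;
}

async function executeBatch(calls: ToolCall[]): Promise<string[]> {
  // Phase 1: collect approvals strictly one at a time (serial UX,
  // so the user never sees overlapping dialogs).
  const approved: ToolCall[] = [];
  for (const call of calls) {
    if (await askPermission(call)) {
      approved.push(call);
    }
  }
  // Phase 2: execute every approved call concurrently (latency win).
  return Promise.all(approved.map((call) => runTool(call)));
}
```

With three simultaneous file reads, the user still answers three prompts in order, but the reads themselves overlap instead of costing 3× the latency.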