How We Run Client Projects with AI
Our workflow for shipping client software using AI-native development. Task scoping, prompt libraries, and quality gates.
We've delivered nine client projects using AI-assisted development. Not all of them went smoothly. Here's the process we've refined.
Task scoping
Every feature gets broken into tasks that take no more than two hours. This isn't arbitrary — it's the sweet spot where AI-generated code stays coherent and reviewable.
Prompt libraries
We maintain a shared prompt library per project: standard prompts for component generation, API endpoints, test writing, and refactoring. This keeps output consistent across team members.
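As a rough sketch of what a shared prompt library can look like in practice: named templates with placeholders, rendered the same way by everyone on the team. The template names and wording below are illustrative, not our actual prompts.

```python
# Hypothetical per-project prompt library: named templates with
# placeholders, so every team member generates code the same way.
PROMPTS = {
    "component": (
        "Generate a {framework} component named {name}. "
        "Follow the project's existing patterns for props and state."
    ),
    "endpoint": (
        "Write a {method} API endpoint at {path}. "
        "Validate inputs and return typed error responses."
    ),
    "tests": (
        "Write unit tests for the following function, covering "
        "edge cases and failure modes:\n{code}"
    ),
}

def render(kind: str, **fields: str) -> str:
    """Fill a shared template so prompts stay consistent across the team."""
    return PROMPTS[kind].format(**fields)

prompt = render("component", framework="React", name="InvoiceTable")
```

Keeping the templates in version control alongside the project means prompt improvements propagate to the whole team, the same way shared code does.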
The review loop
Every piece of AI-generated code gets a human review focused on three things: correctness, security, and maintainability. We skip style nits — the AI handles those well enough.
Quality gates
We run automated tests, type checking, and a build verification on every commit. AI-generated code fails these checks about 15% of the time. That's acceptable — the time saved on initial generation more than compensates.
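A gate like this can be sketched as a small script that runs each check in sequence and stops at the first failure. The commands below are placeholders for whatever test runner, type checker, and build step a given project uses.

```python
# Minimal sketch of a commit-time quality gate: run tests, type
# checking, and a build in order, failing fast on the first error.
# The specific commands are assumptions; substitute your own tooling.
import subprocess
import sys

GATES = [
    ("tests", ["pytest", "-q"]),           # assumed test runner
    ("types", ["mypy", "src"]),            # assumed type checker
    ("build", ["python", "-m", "build"]),  # assumed build step
]

def run_gates(gates) -> bool:
    """Return True only if every gate command exits with status 0."""
    for name, cmd in gates:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"gate failed: {name}")
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if run_gates(GATES) else 1)
```

Wiring this into CI (or a pre-commit hook) is what makes the ~15% failure rate cheap: a failed gate bounces the commit back for another generation pass instead of reaching review.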
What we've learned
AI doesn't eliminate the need for engineering judgment. It shifts the work from writing to reviewing. The teams that succeed are the ones that invest in their review process.