AI-Powered
Every phase of our work is powered by AI tooling. Not AI-assisted — AI-powered. Claude Code is the engine. Agent-browsers, custom skills, and automated pipelines do the work that used to require a full team.
Discovery
Automated research, user testing, and design exploration that produces spec-ready assets.
Key Activities
- Automated competitor research using agent-browsers
- User testing powered by custom tooling
- Design exploration and iteration
- Strategy sessions tied to commercial metrics
Deliverable
Structured assets ready for specification
Claude Code
Claude Code is the core of everything we do. It reads specifications, generates prototypes, decomposes codebases, and writes documentation. We use it across all three phases.
In discovery: Claude Code structures research outputs, synthesizes competitive data, and generates briefs from raw inputs.
In specifications: Claude Code reads existing codebases and reverse-engineers them into documented specs. It maps data models, extracts business rules, and produces visual specifications.
In production: Claude Code generates working prototypes directly from specifications. The output matches the spec because the spec drove the build.
Agent-Browsers
Agent-browsers are headless browsers controlled by AI. They navigate websites, capture data, take screenshots, and extract structured information autonomously.
We use them for automated competitor research:
Task: Research competitor landscape for property management SaaS
Tools: agent-browser, Claude Code
1. Agent-browser navigates to each competitor's marketing site
2. Captures pricing pages, feature lists, integration pages
3. Takes screenshots of key flows and UI patterns
4. Claude Code structures the data into a comparison matrix
Output: Structured competitive analysis with screenshots and data
Time: Hours, not weeks
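The structuring step above can be sketched as a pure function: raw per-competitor captures in, a feature-by-competitor matrix out. This is a minimal sketch; the `CompetitorCapture` shape and function name are illustrative, not a real agent-browser API.

```typescript
// Hypothetical shape of the data an agent-browser run captures per competitor.
interface CompetitorCapture {
  name: string;
  pricingTiers: string[];
  features: string[];
}

// Structure raw captures into a comparison matrix:
// one row per feature, one column per competitor.
function buildComparisonMatrix(
  captures: CompetitorCapture[]
): Record<string, Record<string, boolean>> {
  const allFeatures = new Set(captures.flatMap(c => c.features));
  const matrix: Record<string, Record<string, boolean>> = {};
  for (const feature of allFeatures) {
    matrix[feature] = {};
    for (const c of captures) {
      matrix[feature][c.name] = c.features.includes(feature);
    }
  }
  return matrix;
}

const matrix = buildComparisonMatrix([
  { name: "Acme PM", pricingTiers: ["Free", "Pro"], features: ["tenant portal", "payments"] },
  { name: "RentFlow", pricingTiers: ["Starter"], features: ["payments", "maintenance tracking"] },
]);
console.log(matrix["payments"]); // { "Acme PM": true, "RentFlow": true }
```

The gap analysis falls out of the same structure: any row with a `false` cell is a feature some competitor lacks.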
Agent-browsers also power automated user testing — navigating prototypes, testing flows, and reporting broken interactions before a human ever sees them.
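The reporting half of that loop reduces to filtering step results so a human only reviews failures. A minimal sketch, assuming a hypothetical `StepResult` shape for what an agent-browser records per flow step:

```typescript
// Hypothetical result of one agent-browser step in a tested flow.
interface StepResult {
  flow: string;
  step: string;
  ok: boolean;
  note?: string;
}

// Collect broken interactions so a human only reviews failures.
function brokenInteractions(results: StepResult[]): string[] {
  return results
    .filter(r => !r.ok)
    .map(r => `${r.flow} / ${r.step}${r.note ? `: ${r.note}` : ""}`);
}

const report = brokenInteractions([
  { flow: "signup", step: "submit form", ok: true },
  { flow: "signup", step: "verify email", ok: false, note: "button unresponsive" },
]);
console.log(report); // ["signup / verify email: button unresponsive"]
```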
Custom Skills
Skills are reusable Claude Code instructions scoped to specific tasks. They encode our methods so the work is consistent and repeatable.
Examples of skills we use:
- Spec extraction — reads a codebase and produces a product specification. Maps database schema, API endpoints, business rules, and UI flows into a structured document.
- Competitor analysis — takes a list of competitors and produces a comparison matrix with features, pricing, positioning, and gaps.
- Prototype generation — reads a specification and generates a working Next.js prototype with data models, page layouts, and user flows.
- Design audit — analyses a UI against accessibility, consistency, and usability criteria.
Skills are the difference between "use Claude Code" and "use Claude Code well." They capture what we've learned about prompting, structure, and output quality.
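Concretely, a skill is a markdown file with YAML frontmatter that Claude Code loads when the task matches. A minimal sketch of the spec-extraction skill described above; the field values and steps are illustrative, not our production skill:

```markdown
---
name: spec-extraction
description: Read a codebase and produce a structured product specification.
---

When asked to extract a specification from a codebase:

1. Map the database schema into entities, relationships, and constraints.
2. List API endpoints and link each to a user-facing feature.
3. Document business rules found in application logic.
4. Produce one structured document covering screens, flows, and states.
```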
Code Decomposition
One of our most valuable methods. We take an existing codebase and decompose it into a product specification.
Task: Create specification from existing codebase
Tools: Claude Code with spec-extraction skill
1. Analyze database schema — extract entities, relationships, constraints
2. Map API endpoints to user-facing features
3. Document business rules from application logic
4. Trace UI components to data flows
5. Generate visual specification with screens, flows, and states
Output: Complete product specification from working code
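The five steps above imply a target shape for the extracted spec. A minimal sketch in TypeScript; the type and field names are illustrative, not a fixed schema:

```typescript
// Illustrative output shape for a spec-extraction run.
interface Entity {
  name: string;
  fields: string[];
  relationships: string[]; // e.g. "Lease belongs to Unit"
}

interface Feature {
  name: string;
  endpoints: string[];     // API endpoints that power the feature
  businessRules: string[]; // rules traced from application logic
}

interface ProductSpec {
  entities: Entity[];
  features: Feature[];
  screens: string[];       // screens, flows, and states from the UI trace
}

// Example fragment: what decomposing a property-management app might yield.
const spec: ProductSpec = {
  entities: [
    { name: "Lease", fields: ["startDate", "endDate", "rent"], relationships: ["Lease belongs to Unit"] },
  ],
  features: [
    { name: "Rent collection", endpoints: ["POST /payments"], businessRules: ["Late fee applies after 5 days"] },
  ],
  screens: ["Payments dashboard"],
};
console.log(spec.features[0].name); // "Rent collection"
```

Because every feature carries its endpoints and business rules, each requirement in the generated spec stays traceable back to the code it came from.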
This powers our product audit service. Clients with existing products get a documented spec of what they have — often for the first time.
Automated Research Pipelines
We chain tools together into pipelines that run autonomously:
Competitive research pipeline: Agent-browser crawls → Claude Code structures → comparison matrix → gap analysis → brief
Specification pipeline: Codebase analysis → schema extraction → business rule documentation → visual spec generation
Prototype pipeline: Specification input → data model generation → page layout → component assembly → working prototype
Each pipeline produces documented, reproducible output. Run the same pipeline on the same inputs — you get the same results.
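The chaining above can be sketched as ordinary function composition: each stage is a deterministic step, so identical inputs yield identical outputs. The stage names here are toy stand-ins, not real pipeline code:

```typescript
// A pipeline stage is a pure function from one artifact to the next.
type Stage<I, O> = (input: I) => O;

// Compose two stages into one pipeline.
function pipeline<A, B, C>(s1: Stage<A, B>, s2: Stage<B, C>): Stage<A, C> {
  return input => s2(s1(input));
}

// Toy stages standing in for "crawl" and "structure".
const extractNames: Stage<string[], string[]> = pages => pages.map(p => p.split(":")[0]);
const toMatrix: Stage<string[], string> = names => [...names].sort().join(" | ");

const research = pipeline(extractNames, toMatrix);

// Deterministic: the same input always produces the same output.
const a = research(["RentFlow:pricing", "Acme:features"]);
const b = research(["RentFlow:pricing", "Acme:features"]);
console.log(a === b); // true
console.log(a);       // "Acme | RentFlow"
```

Reproducibility is a property of the design: as long as each stage is a pure function of its input, rerunning the pipeline is a no-op unless the inputs changed.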
Spec-Driven vs Vibe Coding
The tooling only works because we're spec-driven. Without specs, AI tools produce inconsistent, undocumented output that drifts with every prompt.
Vibe coding:
- Prompt → code → prompt → code → prompt → code
- No documentation. No traceability. No reproducibility.
- The AI forgets context. Features conflict. The code drifts.
Spec-driven:
- Research → specification → generation
- Every decision documented. Every feature traceable to a requirement.
- Change the spec, regenerate the prototype. Same inputs, same outputs.
The spec is what makes the AI tools reliable. Without it, you're just hoping the AI remembers what you said three prompts ago.
The Stack
Our default tooling:
| Tool | Role |
|------|------|
| Claude Code | AI engine — specification, generation, decomposition |
| Agent-browser | Automated web research and testing |
| Custom skills | Reusable instructions for consistent output |
| Next.js + shadcn/ui | Prototype and production stack |
| Tailwind CSS | Styling |
| Vercel | Deployment |
The stack is opinionated. Claude Code knows these tools well. Consistency in the stack means consistency in the output.
The following chapters cover each phase — Discovery, Specifications, and Production — in detail.