AI-powered production

Production is where most AI-assisted projects fail. The vibe-coded prototype works, so developers try to refactor it into production. The code resists. The architecture collapses. They end up rewriting everything anyway.

Productamp takes a different approach: extract specs from the prototype, then build production from scratch on a proven default stack.

The Shift in Mindset

Prototype phase: "Make it work."
Production phase: "Make it right."

You're not coding anymore. You're specifying what to build, then using AI to generate the implementation.

Step 1: Extract Specifications

The frozen prototype is your source of truth. Extract three types of specs:

1. Data Structure (SQL Schema)

Map every entity in the prototype to a database table.

From prototype:

  • Properties list with name, address, units
  • Units with number, rent amount, status
  • Tenants with name, email, lease dates

To SQL schema:

CREATE TABLE properties (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID NOT NULL REFERENCES auth.users(id), -- owner; used by RLS policies
  name TEXT NOT NULL,
  address TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE units (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  property_id UUID NOT NULL REFERENCES properties(id),
  unit_number TEXT NOT NULL,
  rent_amount DECIMAL(10,2),
  status TEXT CHECK (status IN ('vacant', 'occupied', 'maintenance')),
  created_at TIMESTAMPTZ DEFAULT NOW()
);

Include:

  • Foreign key relationships
  • Constraints (unique, not null, check)
  • Indexes for common queries
  • Default values
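The schema can also be mirrored as row types in the application layer, which keeps AI-generated code honest about field names. A sketch in TypeScript (the types are illustrative, hand-written to match the SQL above):

```typescript
// Row types mirroring the SQL schema. UUIDs and timestamps arrive as
// strings from the API; DECIMAL maps to number here (use a decimal
// library for money arithmetic in real code).
type UnitStatus = "vacant" | "occupied" | "maintenance";

interface UnitRow {
  id: string;
  property_id: string; // FK -> properties.id
  unit_number: string;
  rent_amount: number;
  status: UnitStatus;
  created_at: string; // ISO timestamp
}

// Example row, shaped as a query would return it
const exampleUnit: UnitRow = {
  id: "00000000-0000-0000-0000-000000000002",
  property_id: "00000000-0000-0000-0000-000000000001",
  unit_number: "4B",
  rent_amount: 1450.0,
  status: "vacant",
  created_at: "2025-01-15T10:00:00Z",
};
```

Giving the AI these types alongside the schema reduces drift between database columns and frontend code.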

2. Business Rules (Markdown)

Document the logic that governs the application.

Example business rules:

  • "When a unit is marked as occupied, its status cannot be changed until the lease end date"
  • "Monthly rent owed is prorated: unit rent amount × (days occupied ÷ days in month)"
  • "Property managers can only see properties they own or manage"
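Rules like these should end up as small, testable functions rather than prose scattered through handlers. A sketch of the rent rule in TypeScript, reading the occupancy factor as the occupied fraction of the month (proratedRent is a hypothetical helper, not part of any library):

```typescript
// Prorated rent for a unit in a given month.
// Rounds to cents, since rent_amount is DECIMAL(10,2) in the schema.
function proratedRent(
  rentAmount: number,
  occupiedDays: number,
  daysInMonth: number
): number {
  if (daysInMonth <= 0 || occupiedDays < 0 || occupiedDays > daysInMonth) {
    throw new Error("invalid day counts");
  }
  return Math.round(rentAmount * (occupiedDays / daysInMonth) * 100) / 100;
}
```

A function like this is trivial for the AI to call consistently, and trivial for you to test against the written rule.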

3. Test Specifications

Define expected behavior for each user flow.

From prototype interaction:

  • User creates property → Property appears in list
  • User tries to delete property with units → Error message shown
  • User filters by status → Only matching properties shown

To test spec:

GIVEN a property manager is logged in
WHEN they create a new property with name "Oak Apartments"
THEN the property appears in their property list
AND the property has status "active"
AND the created_at timestamp is set to current time
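A spec in this shape translates directly into a test. A minimal sketch against an in-memory stand-in for the database (the real test would run against a Supabase test project; createProperty, listProperties, and the store are all hypothetical):

```typescript
// In-memory stand-in for the properties table, for illustration only.
interface Property {
  id: string;
  owner: string;
  name: string;
  status: string;
  created_at: Date;
}

const properties: Property[] = [];

function createProperty(owner: string, name: string): Property {
  const p: Property = {
    id: String(properties.length + 1), // real code would use a UUID
    owner,
    name,
    status: "active", // business rule: new properties start active
    created_at: new Date(),
  };
  properties.push(p);
  return p;
}

function listProperties(owner: string): Property[] {
  // Mirrors the access rule: users see only their own properties
  return properties.filter((p) => p.owner === owner);
}

// GIVEN a property manager is logged in
const managerId = "manager-1";
// WHEN they create a new property with name "Oak Apartments"
createProperty(managerId, "Oak Apartments");
// THEN it appears in their list with status "active" (asserted below)
```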

Step 2: Choose Your Stack

Default Stack (Recommended):

  • Database: Supabase (PostgreSQL + Auth + Real-time)
  • Framework: Next.js 14+ (App Router)
  • Admin/CRUD: Refine.dev
  • UI: Tailwind CSS + shadcn/ui
  • Deployment: Vercel

Why this stack:

  • AI tools know these frameworks well
  • Supabase handles auth, database, storage, real-time out of the box
  • Refine provides CRUD scaffolding that AI can extend
  • Strong conventions reduce decision fatigue

When to deviate:

  • Your team has deep expertise in a different stack
  • Client requires specific technology (e.g., must use AWS)
  • Product has unique requirements (e.g., real-time gaming needs different infra)

Step 3: Build with AI

With specs and stack defined, building is mechanical.

Initial Setup

  1. Create Supabase project

    • Run SQL schema
    • Enable Row Level Security (RLS)
    • Configure auth providers
  2. Initialize Next.js + Refine

    npx create-refine-app@latest
    # Choose: Next.js, Supabase, Tailwind
    
  3. Set up environment

    • Supabase credentials in .env.local
    • Deployment pipeline on Vercel

AI-Assisted Development

For each feature:

  1. Provide context to AI:

    I'm building a property management feature.
    
    Data structure: [SQL schema]
    Business rules: [Markdown spec]
    Stack: Next.js App Router, Supabase, Refine, shadcn/ui
    
    Task: Create a property list page that shows all properties
    for the logged-in user, with filters by status.
    
  2. Review generated code

    • Does it match the spec?
    • Are security rules applied (RLS)?
    • Are error states handled?
  3. Iterate

    • Point out spec deviations
    • Request refactoring if code quality is low
    • Add edge case handling

Quality Gates

Before shipping each feature:

  • [ ] Matches prototype behavior
  • [ ] Passes test specifications
  • [ ] RLS policies prevent unauthorized access
  • [ ] Error states have clear messaging
  • [ ] Loading states prevent layout shift
  • [ ] Mobile responsive
  • [ ] Dark mode supported (if applicable)

Step 4: Test Against Prototype

The prototype is your regression test.

For each user flow:

  1. Perform the action in the prototype
  2. Perform the same action in production
  3. Compare results

They should match exactly. If production behaves differently, it's one of three things:

  • A bug (production is wrong)
  • A missed requirement (spec was incomplete)
  • An intentional improvement (document why it differs)
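When both apps expose results in the same shape, the comparison step can be partially automated. A hedged sketch of a field-level diff (diffResults is a hypothetical helper):

```typescript
// Compare a prototype result against the production result.
// Returns the fields that differ, so each mismatch can be triaged
// as a bug, a missed requirement, or a documented improvement.
function diffResults(
  prototype: Record<string, unknown>,
  production: Record<string, unknown>
): string[] {
  const keys = new Set([
    ...Object.keys(prototype),
    ...Object.keys(production),
  ]);
  const mismatches: string[] = [];
  for (const key of keys) {
    // JSON comparison is a blunt instrument but fine for triage
    if (JSON.stringify(prototype[key]) !== JSON.stringify(production[key])) {
      mismatches.push(key);
    }
  }
  return mismatches;
}
```

Run it per user flow and investigate any non-empty result.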

Common Pitfalls

Pitfall 1: Skipping RLS

Supabase Row Level Security is not optional. Without it, users can access each other's data.

AI often generates code without RLS. You must add policies manually:

-- RLS must be enabled on the table for policies to apply
ALTER TABLE properties ENABLE ROW LEVEL SECURITY;

-- Properties: Users can only see their own
CREATE POLICY "Users can view own properties"
  ON properties FOR SELECT
  USING (auth.uid() = user_id);
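The USING clause is just a per-row predicate. Mirroring it in application code makes the intent easy to unit-test, even though the real enforcement happens in Postgres (canSelectProperty is an illustrative mirror, not a substitute for the policy):

```typescript
// TypeScript mirror of: USING (auth.uid() = user_id)
// auth.uid() is null for unauthenticated requests, so model that too.
function canSelectProperty(
  row: { user_id: string },
  authUid: string | null
): boolean {
  return authUid !== null && authUid === row.user_id;
}
```

If this predicate and the SQL policy ever disagree, the SQL wins; the mirror exists only to document and test the rule.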

Pitfall 2: Trusting Generated Tests

AI-generated tests often pass trivially (testing that 2 + 2 = 4 instead of real logic).

Write critical tests yourself. Use AI to speed up test writing, not replace thinking.
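For example, the status-lock rule from the business rules above deserves a hand-written test that pins the boundary condition, not a triviality. A sketch (canChangeStatus is a hypothetical implementation of that rule):

```typescript
// Business rule: an occupied unit's status is locked until the lease ends.
function canChangeStatus(
  unit: { status: string; leaseEnd: Date | null },
  now: Date
): boolean {
  if (unit.status !== "occupied") return true;
  return unit.leaseEnd !== null && now >= unit.leaseEnd;
}

const leaseEnd = new Date("2025-06-30");
```

The assertions worth writing are the ones at the edges: mid-lease (locked), after lease end (unlocked), and non-occupied units (never locked).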

Pitfall 3: Over-engineering

AI loves to add abstraction layers. Keep it simple:

  • Don't create a service layer for every database table
  • Don't build a custom auth system when Supabase Auth works
  • Don't optimize until you have performance problems

Timeline

Week 1:

  • Set up infrastructure (Supabase, Next.js, Refine)
  • Build data access layer (database queries, RLS policies)
  • Create first CRUD feature

Week 2:

  • Build remaining features following specs
  • Implement business rules
  • Add error handling and edge cases

Week 3:

  • Testing against prototype
  • Bug fixes
  • Deployment and monitoring setup

Total: 3-4 weeks from spec extraction to production launch.

For solo developers, this is achievable. For teams, it's faster.

Tools

  • Claude Code / Cursor - AI coding assistants
  • Supabase - Database and backend services
  • Refine - Admin panel and CRUD framework
  • Vercel - Hosting and deployment
  • Sentry - Error tracking
  • PostHog - Product analytics

The Output

At the end of production, you have:

  • A deployed SaaS application
  • Clean, maintainable code
  • Proper security (auth, RLS)
  • Monitoring and error tracking
  • Documentation of business rules

The prototype is discarded. Production is now the source of truth.

Iteration

When you need to add new features:

  1. Prototype the feature (in isolation or integrated with prod)
  2. Freeze the prototype
  3. Extract specs
  4. Build production version
  5. Deploy

The cycle repeats. You're not maintaining prototype code. You're using prototypes as specifications for production features.

This is how you build and scale with AI.