Teardowns

How We Built Credaro in Three Weeks

A property accounting SaaS from zero to production in 21 days. Here's every decision we made and why.

Credaro is a property accounting platform. Three weeks ago it didn't exist. Now it handles real transactions for real property managers.

This is the full teardown — not the highlight reel.

Days 1–3: Scoping with the constraint

The client came with a spreadsheet problem. Property managers tracking rent, expenses, and owner distributions across dozens of properties — all in Excel.

We didn't start with a PRD. We started with their messiest spreadsheet and asked: what breaks first when you add a new property?

The answer shaped the entire data model.

The stack decision

Next.js with server actions. Postgres on Neon. Tailwind and shadcn/ui for the interface. Deployed on Vercel.

We didn't debate this. This is our default stack — the one where AI tools are most productive. Every hour spent choosing tools is an hour not spent building.

Week 1: The core loop

Property → Unit → Lease → Transaction. That's the data model. Four tables that handle 90% of what property managers do daily.
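A minimal sketch of how those four tables might relate, expressed as TypeScript types. The field names and the lookup helper are illustrative assumptions, not Credaro's actual schema:

```typescript
// Hypothetical shapes for the four core tables. Money is stored in cents.
interface Property { id: string; name: string; address: string }
interface Unit { id: string; propertyId: string; label: string }
interface Lease {
  id: string;
  unitId: string;
  tenant: string;
  startDate: string; // ISO date
  endDate: string;
  monthlyRentCents: number;
}
interface Transaction { id: string; leaseId: string; date: string; amountCents: number; memo: string }

// Example: every transaction resolves back to a property through the chain
// Transaction → Lease → Unit → Property.
function propertyForTransaction(
  tx: Transaction,
  leases: Lease[],
  units: Unit[],
  properties: Property[],
): Property | undefined {
  const lease = leases.find((l) => l.id === tx.leaseId);
  const unit = lease && units.find((u) => u.id === lease.unitId);
  return unit && properties.find((p) => p.id === unit.propertyId);
}
```

The chain structure is what makes "add a new property" cheap: a new property pulls in units, leases, and transactions without touching any other property's rows.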

We used Claude Code to scaffold the entire CRUD layer. Models, API routes, validation, basic UI — generated in a day, then two days of editing it into shape.

The AI got us to 70% fast. The remaining 30% was the hard part: edge cases in lease date calculations, partial month proration, handling security deposits correctly.
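To give a flavor of the proration edge cases: here is a sketch of one common convention, the actual-days method, where a partial month is charged proportionally to the days occupied. This is an illustration of the problem, not necessarily the convention Credaro uses:

```typescript
// Prorate a monthly rent (in cents) for a partial month by actual days.
// `month` is 1-based; new Date(year, month, 0) yields the last day of that month.
function prorateRent(
  monthlyRentCents: number,
  year: number,
  month: number,
  daysOccupied: number,
): number {
  const daysInMonth = new Date(year, month, 0).getDate();
  return Math.round((monthlyRentCents * daysOccupied) / daysInMonth);
}

// 15 days of a $1,500 April rent: 150000 * 15 / 30 = 75000 cents.
const aprilHalf = prorateRent(150000, 2024, 4, 15);
```

The edge cases come from exactly this kind of detail: February versus 31-day months, whether move-in day counts, and rounding so the cents reconcile with the ledger.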

Week 2: The money part

Accounting is where most property management tools fail. They treat it as an afterthought.

We built the ledger system from first principles: every transaction is a double entry. No exceptions, no shortcuts. This cost us two extra days upfront but saved us from a rewrite later.
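The invariant is simple to state in code: a transaction's entries must sum to zero, and anything unbalanced is rejected at posting time. The account names and API below are a sketch, not Credaro's actual schema:

```typescript
// Minimal double-entry sketch: positive amounts are debits, negative are credits.
interface Entry { account: string; amountCents: number }

// Reject any transaction whose entries do not balance to exactly zero cents.
function postTransaction(entries: Entry[]): Entry[] {
  const sum = entries.reduce((acc, e) => acc + e.amountCents, 0);
  if (sum !== 0) throw new Error(`Unbalanced transaction: off by ${sum} cents`);
  return entries;
}

// Rent payment received: debit cash, credit rental income.
const rentPayment = postTransaction([
  { account: "assets:cash", amountCents: 150000 },
  { account: "income:rent", amountCents: -150000 },
]);
```

Enforcing the zero-sum check at the posting boundary is what "no exceptions, no shortcuts" buys you: the ledger can never drift, so reports are always a sum over entries.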

The owner distribution calculation — splitting income and expenses across property owners by their ownership percentage — was the hardest single feature. Cursor was helpful here because we needed to see the math inline while editing.
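The tricky part of a percentage split is rounding: cents must sum exactly to the original amount. A common fix, assumed here for illustration, is to floor each share and give the remainder to the last owner:

```typescript
// Split a net amount (in cents) across owners by ownership share.
// Shares are assumed to sum to 1; the remainder rule is an illustrative choice.
interface Owner { name: string; share: number }

function distribute(netCents: number, owners: Owner[]): Map<string, number> {
  const out = new Map<string, number>();
  let allocated = 0;
  owners.forEach((o, i) => {
    const cents =
      i === owners.length - 1
        ? netCents - allocated // last owner absorbs the rounding remainder
        : Math.floor(netCents * o.share);
    out.set(o.name, cents);
    allocated += cents;
  });
  return out;
}

// $100.01 split 50/50: one owner gets 5000 cents, the other 5001.
const split = distribute(10001, [
  { name: "alice", share: 0.5 },
  { name: "bob", share: 0.5 },
]);
```

Whatever remainder rule you pick, the property worth testing is that the distributed cents always sum back to the net amount.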

Week 3: Polish and production

The last week was all about making it feel right. Loading states, error handling, empty states, and the small interactions that make software feel finished versus demo-quality.

We shipped to three beta users on day 19. Found two critical bugs on day 20. Fixed and redeployed on day 21.

What AI did and didn't do

AI wrote roughly 60% of the initial code. We rewrote about half of that. So AI contributed maybe 30% of the final codebase directly.

But that math misses the point. AI's real value was velocity through the boring parts. Scaffolding, boilerplate, test setup, deployment config — the stuff that's not hard but takes time. That time went into the hard problems instead.

The honest numbers

  • 21 calendar days, 2 developers
  • ~15,000 lines of code shipped
  • 4 database tables, 23 API endpoints
  • Zero frameworks we hadn't used before

Three weeks isn't magic. It's what happens when you eliminate decision fatigue and let AI handle the mechanical work.