Confidential · 2025
A change that has already happened

Build products at the speed of thought.

Millions of people are trying to build software products with AI. They can prototype in minutes, but they can't ship. Lovable proved the demand: $400M ARR, 15M users, 200K projects created daily. But 60–70% is as far as most get. There is no user-friendly way to build scalable products with AI. That's about to change.

Lovable: $400M ARR · $6.6B valuation
Prototyping solved. Production isn't.
01. The Change
The constraint has moved

Writing code is no longer the bottleneck.
Knowing what to build is.

For the past twenty years, the entire software industry organized itself around one constraint: the person who could write code. That constraint shaped every team structure, every process, every tool.

The constraint is gone. And the industry has about 18 months to reorganize before the teams that figured it out first have an insurmountable lead.

The old world
Writing code = scarce, expensive, slow.
Everything organized around execution.
The new world
Writing code = commodity.
The constraint moves upstream to thinking.
"AI-generated code gets you 60–70% of the way. The last 30% (architecture, quality, production-readiness) is where every project succeeds or fails."
Consensus from 15M+ Lovable users, 2025–2026
02. Stakes
There will be winners and losers

A threshold is approaching.
The window to cross it is closing.

Below the threshold
Old process, new tools.
Same outcomes.
  • Use AI to generate code faster, without knowing what to build
  • Ship prototypes that hit the 60–70% wall, then stall
  • Burn AI credits building features nobody validated
  • No feedback loop. Ship, forget, repeat
Above the threshold
New process. AI at the center.
Human in the driver's seat.
  • Tight iteration cycles between discovery, definition, execution, and measurement
  • AI handles the building. Humans make the key decisions
  • Teams of 2–3 move 100× faster than teams that haven't adapted
  • Every cycle closes the loop and makes the next one smarter

The difference isn't talent or budget. It's whether you have a system that connects what users need to what gets built.

03. The Evidence
The entire industry organized around a constraint that no longer exists

50% of every team's time goes to the step that determines 11% of outcomes.

Because execution was the limiting factor for so long, squads have been disproportionately built around engineering. Now that code is becoming a commodity, there's a growing discrepancy between how much time teams spend on execution and how much it actually determines outcomes.

Chart: Squad time allocation vs step importance · 16-person team
Legend: Step importance · Productive hours · Coordination overhead

Based on modeled squad: 1 PM · 2 designers · 8 SWEs · 1 DS · 2 QA · 1 DevOps · 1 Analyst. Full methodology in the appendix report.

4.6×
Execution overallocated relative to its importance
56%
Of outcomes determined before code is written
23%
Of squad time spent on those steps
04. The Promised Land
Welcome to the age of the product builder

Hybrid teams, humans and AI agents, where humans set the direction and AI closes every loop.

Today, people spend most of their time moving information between tools and re-aligning teams. Discovery, definition, and design are fragmented across people and platforms, prone to subjective judgment, lost context, and broken handoffs. AI can handle the information flow, the reflection, and the execution. Humans stay in the driver's seat for the decisions that matter.

Information flow
AI channels context between every step
No more copying insights from meetings into docs, docs into tickets, tickets into code. One system holds the full context and moves it forward automatically.
Human in the driver's seat
You set the destination. AI takes the turns.
Like an autonomous vehicle: you decide where to go. AI navigates the route, handles the complexity. When it hits something ambiguous, it asks you how to proceed.
Closed loops
Discovery to measurement in one system
Every step feeds the next. What users said informs what gets built. What gets built is measured against what was intended. The process compounds, removing subjectivity from each cycle.
05. The Obstacles
But three barriers stand in the way

The promised land is real. Getting there is not obvious.

Obstacle 01. The Handoff
Discovery and execution are either slow or misaligned
If connected, there's constant back-and-forth between product and engineering: alignment meetings, spec reviews, clarifications. If disconnected, what gets built drifts from what users actually need. Either way, the handoff between "what should we build" and "what gets built" is where most teams lose.
Obstacle 02. Scaling
Hypothesis testing doesn't scale to production
You cut corners to validate quickly: simplified flows, skipped edge cases, deferred robustness. But there's no system to track what was cut and circle back. The prototype becomes the product by accident. And for product builders who aren't engineers, there's no way to stay in the driver's seat: you can't evaluate which corners were cut or direct what to revisit.
Obstacle 03. The Loop
Nothing closes. At any level.
Features ship but users who requested them never know. Corners cut to validate hypotheses never get revisited. Flows removed from the spec to test core ideas never get added back. There's no mechanism connecting what was built to what worked, what was deferred to what needs attention, or what users asked for to what they received.

These aren't feature gaps. They're structural failures in how products get built. Fixing them requires a new kind of workspace, not a better code editor.

06. The Product
We asked ourselves: what if the entire path, from user need to shipped feature, lived in one system?

That's why we built Tempo.

Eliminates the handoff
Capture

Tempo joins your meetings, captures every user need, and structures it into prioritized specs that flow directly into execution. No handoff, no re-interpretation, no drift between what users said and what gets built.

Meeting bot + transcription + AI analysis + spec generation
Scales hypothesis to production
Build

AI agents execute with structured review gates. The product builder approves key decisions without reading code. The system tracks every shortcut taken and surfaces what needs to be revisited to reach production quality.

Parallel AI agents + review gates + technical debt tracking
Closes every loop
Learn

Users who requested features are notified when they ship. Corners that were cut get surfaced for revisiting. Deferred flows get queued back in. Every cycle feeds the next. The process compounds instead of resetting.

Feedback loops + debt tracking + memory system

Working product. Both discovery (meeting capture → specs) and execution (AI agents → production code) are live and demo-ready.

07. Positioning
Lovable created a graduating class with nowhere to go

Tempo is where they graduate to.

Lovable / Bolt: fast prototypes, hit the wall at 70%
Cursor / Claude Code: powerful, but requires engineers
Tempo
Production-grade AI execution, accessible to product builders, not just engineers
🎨
Lovable / Bolt
Gets you to 70%

Fast prototypes, great for validation. But no architecture, no quality gates, and security flaws in 45% of reviewed projects. Can't scale to production.

Tempo
Gets you to shipped

Meeting → spec → parallel AI execution → review gate → production deploy. Full lifecycle. No code shown. Product builders stay in control.

🔧
Cursor / Claude Code
Requires an engineer

Best-in-class AI-assisted coding. But you need to read code, make architecture decisions, manage git. Built for engineers.

08. The Map
Three stages of product automation

The market is moving through these stages right now. The window to own each one closes fast.

Stage 1. Now
AI generates, humans fix

Foundation models generate code. Humans debug, architect, and deploy. Lovable, Bolt, Cursor. Gets you to 70%.

Powered by: general-purpose LLMs
Stage 2. 18 months
AI executes, humans direct

Structured AI agents execute with review gates and verification. Humans define intent, evaluate quality, close the loop. Production-grade output.

Tempo enters here. This is the window.
Stage 3. 3–5 years
AI builds autonomously from specs

Proprietary models trained on product intent generate complete implementations. Fine-tuned per company, per workflow. The spec is the product.

Powered by: vertically fine-tuned models (Tempo's moat)

The teams that enter Stage 2 now will own the data to build Stage 3. The teams that stay in Stage 1 will be buying that capability from someone else, or they won't exist.

09. The Moat
Two compounding advantages

Every user interaction makes Tempo harder to replicate.

Moat 1. Proprietary Model: Spec-Grounded Verification Loop
Generator

Fine-tuned open-source model takes product spec + codebase context and produces candidate implementations.

Verifier

Separate model scores each implementation against the original spec: correctness, security, architectural coherence.

RLHF Loop

Every user approval or rejection is a training signal. The model learns what "good" means for product builders specifically.
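The three pieces above can be sketched as one loop. This is a minimal illustration, not Tempo's implementation: all names are hypothetical, and the generator and verifier are stand-ins for the fine-tuned models described.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    code: str
    score: float = 0.0

def generate(spec: str, n: int = 3) -> list[Candidate]:
    # Stand-in generator: a fine-tuned model would produce real
    # implementations from the spec plus codebase context.
    return [Candidate(code=f"# implementation {i} for: {spec}") for i in range(n)]

def verify(spec: str, candidate: Candidate) -> float:
    # Stand-in verifier: a separate model would score correctness,
    # security, and architectural coherence against the spec.
    return 1.0 if spec in candidate.code else 0.5

@dataclass
class FeedbackStore:
    # Every approval or rejection becomes an RLHF training signal.
    signals: list[tuple[str, str, bool]] = field(default_factory=list)

    def record(self, spec: str, code: str, approved: bool) -> None:
        self.signals.append((spec, code, approved))

def build_cycle(spec: str, store: FeedbackStore, approve) -> Candidate:
    # Generate candidates, score each against the spec, keep the best.
    candidates = generate(spec)
    for c in candidates:
        c.score = verify(spec, c)
    best = max(candidates, key=lambda c: c.score)
    approved = approve(best)  # human review gate, no code reading required
    store.record(spec, best.code, approved)
    return best
```

The point of the sketch is the data flow: every pass through `build_cycle` both ships a candidate and deposits a labeled (spec, code, verdict) triple for the next fine-tune.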

Moat 2. Token Broker Platform

Users don't manage AI credentials. Tempo optimizes model selection per task, negotiates token pricing at scale. SaaS subscription today → usage-based token broker as the platform grows.
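One way per-task model selection could work, sketched under stated assumptions: the catalog, prices, and quality scores below are illustrative placeholders, not real quotes or benchmarks.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str
    cost_per_1k_tokens: float  # negotiated rate (illustrative numbers)
    quality: float             # internal benchmark score in [0, 1]

CATALOG = [
    Model("small-fast", 0.0004, 0.62),
    Model("mid-tier", 0.0030, 0.80),
    Model("frontier", 0.0150, 0.95),
]

def pick_model(task_complexity: float, budget_per_1k: float) -> Model:
    """Cheapest model whose quality clears the bar implied by the task."""
    bar = 0.5 + 0.45 * task_complexity  # harder tasks demand higher quality
    eligible = [m for m in CATALOG
                if m.quality >= bar and m.cost_per_1k_tokens <= budget_per_1k]
    if not eligible:
        # Over budget or unusually hard: fall back to the strongest model.
        return max(CATALOG, key=lambda m: m.quality)
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)
```

Routing cheap tasks to cheap models is where the broker's margin comes from as subscription revenue shifts toward usage-based pricing.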

The Flywheel
User builds
Spec → code → review
Model learns
RLHF from approvals
Quality rises
Better first-pass code
More users
Lower cost, better output

Only Tempo has the full loop: intent → code → human judgment.

10. Market
The graduating class

Lovable proved the demand. Tempo captures the next stage.

15M+
Lovable users
200K new projects created daily. Most are prototypes that never ship.
$400M
Lovable ARR (March 2026)
2× in 3 months. $6.6B valuation. Proves product builders want to build with AI.
70%
The ceiling
Consistent across reviews: AI builders get you 60–70% of the way. The last 30% is the product.
Why now
  • LLM code generation just crossed the production-viability threshold
  • Lovable proved non-engineers will build, but they hit the quality ceiling
  • The "product builder" identity is emerging. No tool serves them yet
  • Open-source models (Kimi, DeepSeek) make fine-tuning economically viable
Day-1 user
  • Product managers and technical PMs: people who understand products deeply but aren't engineers
  • Currently using Lovable/Bolt for prototyping, hitting the wall, hiring engineers to finish
  • Or managing engineering teams and spending 28% of time on coordination overhead
11. Evidence
Evidence the story can come true

I've been circling this problem for a decade.

AN
Antoine Neidecker, Founder
ML Engineer (3 years) · Head of Product (2 years) · 3× founder
mtg.ai
Meeting summarization startup. Literally the discovery feature reborn in Tempo
Train Fitness
Wearable startup, still growing. Proved ability to ship hardware + software product
Crypto startup (Head of Product)
Transaction order flow, still growing. Bridged product thinking and technical execution
Why this founder for this company
  • ML engineer building an AI product. Can design and build the verification loop and fine-tuned model in-house
  • Head of Product who lived the execution gap. Understands why product managers need this tool, not just engineers
  • mtg.ai founder. Already built the meeting-to-insight pipeline once. The discovery feature isn't new territory; it's a returning domain
  • 3× founder with two companies still growing. Pattern of building things that last
Product status
  • Discovery: live. meeting bot + transcription + AI analysis + feature prioritization
  • Execution: live. parallel AI agents, structured task lifecycle, review gates, verification
  • Next: connecting discovery to execution end-to-end, then building the proprietary model
Currently building design partner cohort of 10–20 product builders. Demo-ready today.
12. The Ask
The round

$8M seed to build the model and capture the graduating class.

The transition from human labor to human thinking is happening whether or not there's a tool built for it. We're building the tool, and the proprietary model that makes it defensible.

Use of funds
  • Model development: fine-tuning infrastructure, RLHF pipeline, verification model training
  • Market acquisition: design partner expansion, community of product builders, GTM
  • Team: grow to 10 (ML engineers, product, growth)
18-month milestones → Series A ready
  • Tens of thousands of users on the full discovery → execution workflow
  • Proprietary model live, trained on real user approval/rejection signals
  • Token broker economics proven: usage-based revenue alongside subscriptions
  • Evidence of compounding: users' second cycle is measurably better than their first
Where we are now
  • Working product. Discovery and execution both live
  • Building design partner cohort of 10–20
  • Model architecture designed: generator + verifier + RLHF loop
  • Taking early conversations now. Happy to circle back with design partner data.

Every product team will be reorganized around AI in the next two years.
Tempo is how they get there.

antoine@tempo.diy · tempo.diy

Appendix
Full report →