Working Prototype in 2 Weeks: How We Build AI Products
Most AI projects stall before they produce anything useful. A 2024 survey by RAND Corporation found that 80% of enterprise AI projects fail to move beyond the pilot stage [Source: RAND Corporation, “Identifying and Mitigating the Causes of AI Project Failure,” 2024]. The typical pattern is familiar: months of planning, a bloated requirements document, a proof-of-concept that never reaches production.
There is a different approach. It involves compressing the cycle from idea to working software into two weeks. Not two weeks of planning. Two weeks that end with a deployed application running on real data that real people can use.
This guide explains exactly how that process works, step by step. It is the same process used to build Kova (an AI-powered vehicle recycling platform) and several other production systems for UK businesses.
The 4-Step Process
Every project follows four steps: Understand, Architect, Build, and Validate. These are not phases that run sequentially over months. They overlap and compress into a tight feedback loop across two weeks.
Understand
Map the business problem, existing workflows, and data landscape. Identify where AI adds genuine value versus where a simpler solution suffices.
Architect
Choose the models, integrations, and data structures. Make the hard technical decisions before writing code, not during.
Build
Write the code, integrate the APIs, deploy the infrastructure. Daily progress — not weekly status meetings.
Validate
Put the prototype in front of real users, measure against success criteria, and identify what to refine next.
Week 1: Discovery, Data Audit, and Architecture
Days 1-2: Discovery Session
The process begins with a 90-minute working session. This is not a sales call. It is a structured conversation that covers three things: what the business actually does day to day, where time and money are wasted, and what data already exists. Most businesses have more usable data than they realise — spreadsheets, email histories, CRM records, and operational logs all contain patterns that AI can work with.
The output of this session is a one-page brief that defines the problem, the success criteria, and the constraints. According to McKinsey, companies that define clear objectives before starting AI projects are 2.5 times more likely to achieve their goals [Source: McKinsey & Company, “The State of AI in 2024,” 2024].
Days 2-3: Data Audit
Most AI failures stem from data problems, not model problems. The data audit examines what data exists, where it lives, how clean it is, and what gaps need filling. This involves looking at actual databases, spreadsheets, and APIs — not asking someone to describe them.
Common findings include: data spread across five different spreadsheets with inconsistent formatting, valuable information locked inside email threads, and manual processes that generate no data at all. Each of these has a practical solution that can be implemented within the two-week window.
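As a concrete illustration of that kind of practical fix, here is a minimal sketch of normalising inconsistent spreadsheet exports in the stack this article describes (TypeScript). The field names and formats are hypothetical, not taken from a real client dataset.

```typescript
// Hypothetical sketch: cleaning vehicle rows exported from several
// spreadsheets whose registration and mileage columns use inconsistent
// formats. Field names are illustrative only.

type RawRow = { reg: string; mileage: string };
type CleanRow = { reg: string; mileage: number };

// Normalise a UK-style registration: strip whitespace, uppercase.
function normaliseReg(reg: string): string {
  return reg.replace(/\s+/g, "").toUpperCase();
}

// Parse mileage values written as "42,000", "42000 miles", or "42k".
function parseMileage(raw: string): number {
  const s = raw.trim().toLowerCase().replace(/,/g, "");
  const thousands = s.match(/^(\d+(?:\.\d+)?)k$/);
  if (thousands) return Math.round(parseFloat(thousands[1]) * 1000);
  const digits = s.match(/\d+/);
  return digits ? parseInt(digits[0], 10) : NaN;
}

function clean(rows: RawRow[]): CleanRow[] {
  return rows.map((r) => ({
    reg: normaliseReg(r.reg),
    mileage: parseMileage(r.mileage),
  }));
}
```

A script like this can usually be written and run against the real exports on the same day the audit surfaces the problem.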
Days 3-5: Architecture Decisions
With the problem and data understood, the technical architecture is defined. This covers model selection (which LLM or ML approach fits the task), data pipeline design (how information flows in and out), integration points (how the new system connects to existing tools), and deployment infrastructure.
Architecture decisions are documented but kept lightweight — a system diagram, a database schema, and a list of API endpoints. Gartner reports that organisations spending more than 30% of project time on documentation before building are 40% less likely to ship on time [Source: Gartner, “Agile AI Development Practices,” 2025].
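For a sense of scale, lightweight architecture notes of this kind can even live as code. The sketch below imagines the schema summary and endpoint list for a hypothetical pricing engine; every name in it is invented for illustration, not drawn from a real project.

```typescript
// Hypothetical lightweight architecture notes for a vehicle pricing
// engine, kept as code rather than a long document. All names are
// illustrative assumptions.

// Database schema, summarised as types (backed by PostgreSQL in practice).
type Vehicle = {
  id: string;
  reg: string;        // normalised registration
  make: string;
  model: string;
  year: number;
  mileage: number;
};

type PriceQuote = {
  vehicleId: string;
  price: number;      // GBP
  confidence: number; // 0..1; below a threshold means manual review
  createdAt: Date;
};

// API surface, listed as a map of route to purpose.
const endpoints = {
  "POST /api/vehicles": "register a vehicle from a lookup or upload",
  "POST /api/quotes": "price a vehicle and store the quote",
  "GET /api/quotes/:id": "fetch a stored quote",
} as const;
```

A page like this is enough to start building from, and it stays accurate because it lives next to the code it describes.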
By the end of week one, the database is usually live, the project is deployed to a staging environment, and basic scaffolding is in place. The first lines of production code are written on day three or four, not day thirty.
Week 2: Build, Test, and Iterate
Days 6-8: Core Build
Week two is focused on building. The core functionality — the thing that solves the actual business problem — is built first. If the project is a pricing engine, it prices vehicles by day seven. If it is a document processor, it processes real documents by day seven. Everything else (user management, polish, edge cases) comes after the core works.
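To make "the core works by day seven" concrete, here is a deliberately simplified sketch of what a first pricing core might look like. The formula, constants, and field names are invented for illustration; a real engine would be calibrated against the client's own sales data.

```typescript
// Deliberately simplified pricing core: depreciate a base value by age
// and mileage. The formula and constants are illustrative assumptions,
// not a real pricing model.

type PricingInput = {
  baseValue: number; // value when new, GBP
  ageYears: number;
  mileage: number;
};

function priceVehicle(input: PricingInput): number {
  const ageFactor = Math.pow(0.85, input.ageYears);      // ~15% depreciation per year
  const mileagePenalty = 0.05 * (input.mileage / 10000); // ~5% per 10,000 miles
  const price = input.baseValue * ageFactor * (1 - Math.min(mileagePenalty, 0.6));
  return Math.max(Math.round(price), 0);
}
```

The point is not the formula. It is that something this small, wired to real data, gives the client a working result to react to within days, and every later refinement replaces a line that already runs.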
This is where working as a solo builder rather than a team matters. There are no stand-ups, no pull request queues, no waiting for a designer to finish mockups. A single person who understands the full stack can make decisions and ship changes in minutes, not days. Research from the Standish Group indicates that small teams (1-3 people) complete software projects on time 3 times more often than large teams (10+) [Source: Standish Group, CHAOS Report, 2020].
Days 8-9: Testing with Real Data
The prototype is tested with actual data from the client's business, not synthetic test data. This step reveals problems that no amount of planning uncovers: edge cases in the data, assumptions that were wrong, and features that seemed important but are not. A midway check-in with the client ensures the build is tracking against expectations.
Corrections happen immediately. If a pricing formula produces incorrect results for a specific vehicle category, it is fixed that afternoon. This tight feedback loop is only possible because the same person who wrote the code is sitting in the testing session.
Days 9-10: Polish, Deploy, and Hand Over
The final days focus on deployment stability, error handling, and a clean handover. The prototype is deployed to production infrastructure with proper logging and monitoring. A walkthrough session covers how to use the system, what it does well, and what its current limitations are.
Honesty about limitations matters. A prototype that prices 80% of vehicles accurately and flags the remaining 20% for manual review is more useful than one that claims 100% accuracy but silently gets 15% wrong.
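The flag-for-review pattern described above is straightforward to implement. This sketch shows one way to gate a result on model confidence; the threshold and type names are illustrative assumptions, not a prescribed design.

```typescript
// Sketch of the "flag for review" pattern: return a priced result only
// when confidence clears a threshold, otherwise route the case to a
// human. Threshold and names are illustrative assumptions.

type Quote =
  | { kind: "priced"; price: number; confidence: number }
  | { kind: "manual-review"; reason: string };

const CONFIDENCE_THRESHOLD = 0.8;

function gateQuote(price: number, confidence: number): Quote {
  if (confidence >= CONFIDENCE_THRESHOLD) {
    return { kind: "priced", price, confidence };
  }
  return {
    kind: "manual-review",
    reason: `confidence ${confidence.toFixed(2)} below ${CONFIDENCE_THRESHOLD}`,
  };
}
```

Making the uncertain path explicit in the type system means the rest of the application cannot silently treat a low-confidence guess as a firm price.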
What You Get at the End
The deliverable is working software. Specifically:
- A deployed web application accessible from any browser
- A database with your data structured and queryable
- AI integrations connected to real APIs with real outputs
- Source code in a Git repository that you own
- Documentation covering how the system works and how to extend it
- A clear list of what to build next, prioritised by impact
This is not a proof of concept destined for a slide deck. It is software that people in the business can start using on day eleven. According to Harvard Business Review, organisations that deploy AI prototypes into real workflows within 30 days are 70% more likely to achieve ROI within the first year [Source: Harvard Business Review, “Why AI Projects Succeed,” 2024].
How This Differs from Agency Proof-of-Concepts
The traditional agency model follows a different trajectory. A typical agency engagement begins with a 2-4 week scoping phase, followed by a proposal, followed by a 6-12 week build. The proof-of-concept, when it arrives, often runs on synthetic data in a demo environment. It proves the technology works in theory. It does not prove it works for your business.
The structural problem is the handoff. An agency employs separate roles for sales, project management, design, front-end development, back-end development, and DevOps. Each handoff introduces delay and information loss. By the time the developer writing the code understands the business problem, weeks have passed.
UK digital agencies charge a median day rate of £850-£1,200 per person [Source: Hired, “UK Tech Salary & Rates Report,” 2025]. A team of four over eight weeks is 160 person-days, which at those rates costs between £136,000 and £192,000 before the software reaches production. The two-week prototype model costs a fraction of that because it removes the team overhead entirely.
There is a trade-off. A solo builder cannot produce the same volume of code as a team. But for an AI prototype — where the goal is to validate an idea with working software — volume is not the bottleneck. Clarity, speed, and tight feedback loops matter more.
Frequently Asked Questions
What do I actually receive at the end of the two weeks?
You receive a deployed, working application — not slides, wireframes, or a report. It runs on real infrastructure with real data. You can log in, use it, and share it with colleagues. The code is yours, hosted in your own repository.
Do I need to prepare anything before we start?
You need to be available for a 90-minute discovery session in the first two days and a 30-minute check-in midway through. If your project involves existing data (spreadsheets, databases, APIs), having access credentials ready saves time. Beyond that, no preparation is required.
What happens if the prototype needs changes after delivery?
Changes are expected. The prototype is built to be iterable. After the initial two-week build, most clients move into a monthly retainer where features are added and refined based on real usage. The architecture is designed from day one to support this.
How is this different from what an agency delivers?
Agencies typically spend two to four weeks on scoping alone before writing any code. Their deliverable at week two is usually a statement of work or design mockups. This process delivers working software in the same timeframe because the builder is also the architect — there is no handoff between teams.
What technology stack do the prototypes use?
Most prototypes are built with Next.js, TypeScript, and Tailwind CSS on the front end, with PostgreSQL for data storage and Vercel for hosting. AI components use whichever model fits the task — typically Claude or GPT-4o. The stack is chosen for speed of iteration, not novelty.
Ready to build your prototype?
Book a free discovery call. We will discuss your problem, assess whether a two-week prototype is the right approach, and outline what the build would look like.
Book a Discovery Call