How to scope an AI deployment in two weeks
A practical, two-week blueprint for scoping an enterprise AI programme - from opportunity mapping to a signed scorecard. No workshops, no theatre.
Most enterprise AI programmes start the wrong way. A vendor-led workshop produces a slide full of "use cases", leadership picks the two that sound most impressive, and an engineering team inherits a brief that was never properly scoped. Six months later the programme is quietly rolled into a phase two that never arrives.
Scoping is not a workshop. It is a two-week structured exercise with one question at the centre: which AI bets, sequenced in what order, are most likely to produce a measurable business outcome within the next three quarters? Everything else is downstream of that.
Why two weeks is the right length
A day is too short - you can't walk the data layer, talk to operators, and produce a ranked roadmap in eight hours. A quarter is too long - leadership loses attention and the opportunity set changes. Two weeks lets you interview widely, analyse properly, and still deliver a scorecard before the programme goes stale.
A good two-week scope produces four things:
- A ranked shortlist of AI bets, each with estimated payback and risk.
- An honest list of bets to refuse this year (usually the more important list).
- A written scorecard the programme can be held to.
- A sequenced roadmap with owners, not just priorities.
If your two-week scope doesn't produce all four, it has been a theatrical exercise and the programme will drift.
Week one: the ground truth
Week one is about evidence gathering. Not workshops, not ideation - evidence.
- Five to ten interviews across leadership, product, engineering, operations and a finance partner. Ask what is broken, not what is aspirational.
- A tour of the data layer with whoever runs it. Where does the data live, who writes to it, which sources can be trusted, and where is the silent duplication?
- A review of past AI attempts, pilots included. What worked, what quietly failed, and why. This is where most of the real insight lives.
- A look at the competitive frame - what are peers shipping in production, not just announcing.
By the end of week one you should have a draft long-list of 15–25 candidate bets and a view of which ones the organisation is plausibly ready to deliver.
Week two: scoring and sequencing
Week two is quantitative. Every candidate bet is scored against four axes:
- Payback: credible business outcome within two quarters of ship.
- Feasibility: data readiness, dependency risk, team capacity.
- Risk: regulatory, reputational, operational.
- Sequencing: does this bet unlock later bets, or does it stand alone?
The scoring doesn't need to be precise. It needs to be comparable across bets. A scored shortlist of ten bets is infinitely more useful than a prose document that describes fifty.
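To make "comparable" concrete, here is a minimal sketch of what a scored shortlist can look like. The four axes come from the list above; the 1-5 scale, the weights, and the example bets are illustrative assumptions, not a prescribed method.

```python
from dataclasses import dataclass

# Illustrative weights - an assumption, not a prescribed method.
# Risk is scored so that higher means riskier, hence the negative weight.
WEIGHTS = {"payback": 0.4, "feasibility": 0.3, "risk": -0.2, "sequencing": 0.1}

@dataclass
class Bet:
    name: str
    payback: int      # 1-5: credible outcome within two quarters of ship
    feasibility: int  # 1-5: data readiness, dependency risk, team capacity
    risk: int         # 1-5: regulatory, reputational, operational (5 = riskiest)
    sequencing: int   # 1-5: how much this bet unlocks later bets

    def score(self) -> float:
        return sum(WEIGHTS[axis] * getattr(self, axis) for axis in WEIGHTS)

# Hypothetical bets with made-up scores, purely for illustration.
bets = [
    Bet("Invoice triage assistant", payback=4, feasibility=4, risk=2, sequencing=3),
    Bet("Customer churn model", payback=5, feasibility=2, risk=3, sequencing=2),
    Bet("Data-platform consolidation", payback=2, feasibility=5, risk=1, sequencing=5),
]

# The point is comparability: one ranked list, the same axes for every bet.
for bet in sorted(bets, key=Bet.score, reverse=True):
    print(f"{bet.score():.2f}  {bet.name}")
```

The numbers are made up; the mechanism is the point. Whether payback is weighted at 0.4 or 0.5 matters far less than scoring every bet on the same axes - and note how, in this sketch, a feasible data-platform bet with high sequencing value can outrank a flashier model-building bet.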
Once the shortlist is scored, the roadmap falls out. Typically two to three bets survive to the first wave. They get real architecture sketches and owners. Everything else moves to a watchlist with a review date.
The scorecard: the artefact that matters most
A programme that cannot be measured will not be defended in the next budget round. The scorecard is the single most important artefact a scoping exercise produces, and it should be agreed before any build begins.
A good scorecard includes:
- The business outcome for each bet, stated as a measurable metric.
- A baseline and a target, with a review cadence.
- A cost envelope per bet.
- A named owner - not a committee.
- A trigger to stop or rescope, in writing.
We insist on a written stop condition because it is the clearest indicator that a programme is being scoped honestly. If no one can articulate the conditions under which the programme would be rescoped, the programme is not being managed - it is being hoped for.
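The format of the scorecard is incidental, but its shape is simple enough to sketch. The fields below mirror the list above; the structure, field names and example values are illustrative assumptions, not a template any standard mandates.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScorecardEntry:
    bet: str
    outcome_metric: str   # the business outcome, stated as a measurable metric
    baseline: float
    target: float
    review_cadence: str   # e.g. "monthly"
    cost_envelope_usd: int
    owner: str            # a named person, not a committee
    stop_condition: str   # the trigger to stop or rescope, in writing

# A hypothetical entry - every value below is illustrative.
entry = ScorecardEntry(
    bet="Invoice triage assistant",
    outcome_metric="median invoice-handling time, minutes",
    baseline=22.0,
    target=8.0,
    review_cadence="monthly",
    cost_envelope_usd=150_000,
    owner="Head of Finance Operations",
    stop_condition="No measurable reduction by the second monthly review",
)
```

What matters is that every field is filled in before the build starts, and that the stop condition is written down rather than implied.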
Common failure modes
Three patterns we see over and over:
The vendor-led scope. The vendor runs the workshop and discovers, by coincidence, that the most promising use case is the one their platform is best at. Avoid. A scoping engagement should be vendor-neutral by default.
The "all-AI" scope. Every candidate bet gets tagged as an AI problem. In practice, at least a third of the real opportunities are workflow automation, integration or data-platform problems that don't need a model at all. A good scope names those honestly.
The priorities-without-owners scope. Ranked list, no named owners. Roadmap drifts. Budget quietly reallocates. Nothing ships.
Frequently asked
How long should an AI scoping engagement take? Two weeks is the right length for most mid-to-large enterprises. Shorter scopes under-analyse the data layer; longer ones lose leadership attention.
Should scoping be run by a vendor? It can be, provided the vendor is contractually vendor-neutral for the scope itself. A scope run by a platform vendor will, on average, recommend that vendor's platform.
What should an AI scope deliver? A ranked shortlist, a written scorecard, a sequenced roadmap with owners, and an honest list of bets to refuse. If any of those four are missing, the scope is incomplete.
How much does an AI scoping engagement cost? Typically between $20,000 and $75,000 depending on the size of the organisation, the number of stakeholders, and the depth of data-layer review required.
If you want a vendor-neutral scope of your AI programme, our AI strategy service is a two-week engagement designed around these principles. We leave you with a scorecard the programme can be held to.