DoneThat

AI Adoption Guide

An interactive guide to help your team adopt AI. Make AI implementations focused, not chaotic.

Ten years ago, in my first consulting job at the Boston Consulting Group, I was asked to build a framework for AI adoption. The technology looked very different then, but the core challenge was the same: high uncertainty, a lot of promise, and very few proven playbooks.

What worked then still works now. Stay curious, run small experiments, measure honestly, and learn fast. This guide gives you a structured way to do exactly that, without the chaos of trying to do everything at once.

Let's start with some context

There is a lot of hype around AI. Let's ground your expectations.

Status Quo

  • Productivity gains from AI are real but uneven. They are concentrated in specific tasks, not whole jobs.
  • Everybody is trying things, but few have managed to institutionalize AI use and see large gains.
  • Call-center employment is still growing, and the software engineering job market seems to be recovering.

Three Scenarios

  • AGI optimist (Altman and Kurzweil). General intelligence arrives within 5 to 10 years. Most white-collar work changes fundamentally.
  • Capability plateau (Marcus and LeCun). Current architectures hit a ceiling within 2 to 3 years. AI stays useful but narrow.
  • Economics reverse (Zitron). Inference and tooling costs outpace capability gains. AI becomes less economical over time.

How to navigate the uncertainty

No one knows which scenario plays out. These practices keep your options open regardless.

Best practices for uncertainty

  • Prefer reversible decisions until the signal is clear.
  • Make small bets in parallel rather than one large commitment.
  • Create buffers so you can act once things become clearer.
  • Build evaluation muscles. The team that measures learns fastest.

Common pitfalls

  • Tech debt from rushed pilots and AI-coded features that nobody maintains.
  • Treating token and tool spend as a measure of productivity.
  • Me-too adoption without clear owners or success criteria.
  • Believing LinkedIn influencers who promote their own AI businesses.

Our framework for AI adoption

Similar to frameworks like OODA (Observe, Orient, Decide, Act) or the lean startup's Build-Measure-Learn loop, we focus on learning fast in an environment of uncertainty.

Enable

  • People: Enablement and change management.
  • Data: Accessibility for AIs, governance.
  • Risk: Security, spending budgets, etc.

Experiment

  • Prioritize use cases by value and effort.
  • Keep pilots short and outputs reviewable.
  • Clear owners and expectations.

Evaluate

  • Agree on success metrics before the pilot starts.
  • Review against the original target, not vibes.
  • Share learnings openly and generously.

Execute

  • Only scale what has clear owners and proven value.
  • Build repeatable processes, not one-off heroes.
  • Keep the loop running. Execute feeds back into enable.

To start, think backwards

Start from the back. Your strategy should run in reverse.

Execute

  • Execute doesn't matter yet: scale only after the signal exists.

Evaluate

  • Define what great success looks like before picking any tools.
  • Set targets you can actually measure and track over time.
  • Think back to the three AI scenarios. What does winning look like in each?

Experiment

  • Choose use cases that would directly enable those targets.
  • Prioritize by value and effort. Pick small, reversible bets first.
  • Brainstorm more ideas by thinking along your own process steps.

Enable

  • Identify what you need to make the first experiments possible.
  • Close gaps in data access, budget, security, and ownership.
  • Don't start experiments you can't sustain for at least 8 weeks.
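
The "prioritize by value and effort" step can be sketched as a simple scoring pass. The use cases, the 1-to-5 scores, and the value/effort ratio below are illustrative assumptions, not data from this guide:

```python
# Illustrative sketch: rank use cases by a simple value/effort ratio.
# Scores (1-5) are made-up assumptions for the example.

use_cases = [
    # (name, value 1-5, effort 1-5)
    ("Meeting transcription", 4, 1),
    ("Workflow automation", 5, 4),
    ("Vibe-coding websites", 2, 3),
]

def priority(value: int, effort: int) -> float:
    """Higher value and lower effort rank first."""
    return value / effort

# Sort so the smallest, most reversible bets come out on top.
ranked = sorted(use_cases, key=lambda uc: priority(uc[1], uc[2]), reverse=True)
for name, value, effort in ranked:
    print(f"{name}: value={value}, effort={effort}, score={priority(value, effort):.2f}")
```

A ratio keeps the heuristic honest: a medium-value, low-effort pilot outranks a high-value project you cannot afford to run yet.
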
Evaluate

Define the target

What would great success look like?

Guiding questions

  • Think back to the three scenarios for AI development.
  • How would you know you successfully used AI in each case?
  • What would happen to your product, customers, processes?
  • How would you measure that?
Outcome | Metric | Owner
Experiment

How to achieve the targets?

Select use cases that would enable your targets.

General

[Interactive value/effort matrix: place each use case by value (high to low) and effort (low to high)]
AI assistants
AI Projects
AI Skills
Data enrichment
Dictation AIs
MCP
Meeting transcription
Time tracking
Vibe scripting
Vibe-Coding Slides
Vibe-coding websites
Workflow automation


Experiment

Start brainstorming

Walk along your process to identify more potential use cases. Start from pain points and bottlenecks in your existing process, and keep the goals you set at the beginning in mind.

discover → design → build → test → release → adopt → support → retire
Enable

Make it possible

Create the conditions to enable the use cases you just selected.

Data

Best practices

  • Map the 20% of data sources that hold 80% of the value.
  • Prioritize those needed for the selected use cases.
  • Check how AI tools can get access to them.
  • Assign an owner who decides what's accessible.
  • Don't start a cleanup project unless it's really needed.
  • When choosing new tools, prioritize API and MCP access.
Todo | Owner

FinOps

Best practices

  • If possible, give each employee a budget to experiment with.
  • Stop any privately paid subscriptions; they are a business risk.
  • Track token cost and set spend alerts before larger-scale rollouts.
  • Over time, shift from raw spend to cost per useful output.
Todo | Owner
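
The shift from raw token spend to cost per useful output can be sketched as below. The blended token price and the pilot numbers are made-up assumptions for illustration:

```python
# Illustrative sketch: compute cost per useful output for a pilot.
# The blended price is an assumption, not a real vendor rate.

PRICE_PER_1K_TOKENS = 0.01  # assumed blended price in dollars

def cost_per_useful_output(tokens_used: int, useful_outputs: int) -> float:
    """Dollar cost divided by outputs the team actually accepted and used."""
    if useful_outputs == 0:
        return float("inf")  # spend with nothing to show for it
    return tokens_used / 1000 * PRICE_PER_1K_TOKENS / useful_outputs

# Hypothetical pilot: 2M tokens produced 80 accepted summaries.
print(cost_per_useful_output(2_000_000, 80))  # roughly 0.25 dollars each
```

Counting only accepted outputs in the denominator is the point: tokens burned on drafts nobody uses should make a pilot look more expensive, not more productive.
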

Security

Best practices

  • Switch from private accounts to company accounts.
  • Any tool that has access to both your data and the internet is a risk.
  • Be very careful with skills, MCPs, and other third-party add-ons.
  • Have a policy for what's allowed and what isn't.
  • Review best practices regularly; this space is evolving quickly.
Todo | Owner

Responsibilities

Best practices

  • Assign an owner per use case.
  • Assign somebody who decides when to stop pilots.
  • Assign somebody who decides on security and data.
  • Assign somebody who decides on time and budget investments.
Todo | Owner

Time

Best practices

  • Make explicit time available for experimentation.
  • Decide what work gets paused or reduced in favor of experimentation.
  • For example, follow Google's 20% rule.
  • Review frequently, as new AI-related tasks keep appearing.
Todo | Owner

Expertise

Best practices

  • AI assistants can answer most questions.
  • Build expertise in-house; you will need it in the future.
  • Building is the best way to learn, so skip the courses.
  • Share best practices internally.
Todo | Owner

Change Management

Best practices

  • Be clear about your strategy for AI adoption and how it will affect people.
  • Communicate often and repeat yourself.
  • Address concerns.
  • Recruit champions close to the work.
  • Repeat what works across teams.
Todo | Owner

Anything Else

Best practices

  • Add any other enablement gap.
Todo | Owner

Send results by email

We'll send your results to you by email so you can share them with your team.