Stop Counting Dashboards. Data Team ROI in an AI-First World.
The uncomfortable ROI question that's coming for data teams everywhere.
LinkedIn and X might have you convinced that everyone is running their lives on OpenClaw. That’s not the case, though I did hear someone at the gym in SF this morning talk about their stock of Mac minis. What is true is that we are seeing change at a pace we’ve never dealt with before, and it is disruptive, scary, and foundation-shaking. This includes everyone in the data world.
Why is our comfort zone getting so disrupted? Most data teams measure their value by what they produce. Dashboards built. Models deployed. Queries answered. Experiments run. Another way to summarize this is they measure by throughput. More throughput means more productivity, which means more value. Right?
In an AI-first world, the throughput is the easy part. The human element of every one of those metrics is going to zero. Put another way, we can’t attribute our value to throughput any longer. Don’t believe me?
Here’s what I’ve seen firsthand, and we’re only in the early stages. AI can build dashboards in minutes. It can write SQL faster than your best analyst. It can generate a model, run an analysis, and produce a chart before you’d have a chance to finish a back-and-forth Slack thread with someone. “Look what we built” is no longer a viable conversation.
Here’s my hot take: The uncomfortable truth is that we probably always measured ROI for data incorrectly. We went with what felt comfortable. We just didn’t notice because the artifacts took long enough to produce that they felt valuable. After all, high throughput from smart and capable people must be valuable, right?
I think we got it wrong. I know I did. What AI has started to do (and will do more of) is reduce production and throughput cost. It will make it easier to see that most of the artifacts, no matter how sophisticated and fancy they were, didn’t and don’t change decisions.
Does that mean data teams aren’t valuable? Of course not! We’ve just been measuring the wrong thing, and because of that, we’ve managed toward those outcomes. It is no surprise we’ve achieved them. I think that the value of a data team in a world where the production layer is automated is best captured by three things:
Decision Velocity
What I mean: How fast does the company go from question to decision and action? Not from question to response. Not from response to another Slack thread. Real action.
Tracking this metric is hard. Not only technically, but also because doing so is uncomfortable. The standard workflow looks like this: someone asks a question; the data team conducts an analysis; the analysis is presented in a meeting; the meeting ends with follow-ups; another analysis is requested; and three weeks later (if we’re lucky), a decision is made.
Decision velocity measures the total time from “we need to know X” to “we decided Y and executed on it.” The data team’s job is to compress that. Or at least I think it should be our job!
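If you want to see what tracking this could look like in practice, here is a minimal sketch, assuming you keep a simple log with a timestamp for when each question was asked and one for when the resulting decision was actually executed. The log structure and field names below are illustrative, not a prescribed schema.

```python
from datetime import datetime
from statistics import median

# Hypothetical decision log: each entry records when a question was asked
# and when the resulting decision was actually executed (not just answered).
decision_log = [
    {"question": "Should we raise the annual plan price?",
     "asked": datetime(2024, 5, 1), "executed": datetime(2024, 5, 9)},
    {"question": "Do we keep the onboarding checklist?",
     "asked": datetime(2024, 5, 6), "executed": datetime(2024, 6, 2)},
]

# Decision velocity: days from "we need to know X" to "we decided Y and acted."
cycle_times = [(d["executed"] - d["asked"]).days for d in decision_log]
print(f"Median decision velocity: {median(cycle_times)} days")
```

The hard part isn’t the arithmetic; it’s getting the “executed” timestamp recorded at all, because that forces someone to say out loud when a decision was actually made.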
Here’s what makes the scientist in me uncomfortable. This means the data team needs to stop optimizing for thoroughness and start optimizing for speed-to-decision. An 80% answer today is almost always more valuable than a 95% answer next week. AI makes this easier because it can produce the 80% answer in hours (maybe minutes). But someone still needs to know which questions matter, frame them correctly, and push the organization to actually make a decision. That’s the data team’s real job.
Experiment Yield
What I mean: What percentage of experiments actually produce a clear ship/no-ship decision?
Most companies that run experiments talk about wins and losses. The truth is that a huge percentage of their experiments are inconclusive. The experiments run for weeks or months, produce ambiguous results, and end with someone making a gut call anyway. All the experiment did was delay the decision. It didn’t change it at all.
Experiment yield is the percentage of experiments that end with a confident, final decision. “We learned something interesting” doesn’t count. “The results were directionally positive” might sound nice, but it doesn’t work either. I mean a real decision: we’re shipping this, or we’re not, and here’s exactly why.
What does a low experiment yield tell us? A few things. First, we’re spending time on effects that are too small to detect (the experiment was underpowered from the start). Second, we’re measuring the wrong metrics (the primary metric doesn’t capture the real impact). Third, we’re running experiments on things that don’t matter (nobody was going to change their plan regardless of the result).
Here’s an example. Take your last 20 experiments. For each one, ask: did this experiment directly lead to a ship or no-ship decision within one week of results? Count the ones that did. Divide by 20. If your yield is below 70%, you’re wasting experiment capacity.
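To make that audit concrete, here is a minimal sketch in Python, assuming each experiment is tagged with whether it led directly to a ship/no-ship call within a week of results. The list below is illustrative; you would fill in your actual last 20 experiments.

```python
# Hypothetical audit of the last 20 experiments. "decided_within_week" is True
# only if the experiment led directly to a ship/no-ship call within one week
# of results; "we learned something interesting" counts as False.
experiments = [
    {"name": "new_pricing_page", "decided_within_week": True},
    {"name": "email_subject_test", "decided_within_week": False},
    # ... remaining experiments
]

yield_rate = sum(e["decided_within_week"] for e in experiments) / len(experiments)
print(f"Experiment yield: {yield_rate:.0%}")

if yield_rate < 0.70:
    print("Below 70%: you're likely wasting experiment capacity.")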
Revenue Affected
What I mean: What incremental revenue can you trace to a decision that wouldn’t have happened otherwise?
You might want to throw things at me for proposing this. It is controversial and hard to measure. Yet it is also the one most data teams avoid because it requires taking a strong position on causality. We have to be willing to say: “This revenue happened because we did X, and it would not have happened if we hadn’t.” That is deeply uncomfortable inside an organization.
It is much more comfortable to hide behind influence. “We provided insights that informed the product roadmap.” That is proximity, not causality. You have to be able to draw a line from a specific data-driven action to a specific revenue outcome.
Not everything has to be perfect here. But every quarter, your data team should be able to point to at least three decisions in which data directly drove a revenue outcome. Some examples include: a pricing change that increased ARPU, an experiment that improved conversion, a model that reduced churn, a segmentation that unlocked a new market.
Here’s how to start. Build a “data-attributed revenue” ledger. Every quarter, the data team documents: what decision was made, what data informed it, what the estimated revenue impact was, and how confident you are in the causal claim. You don’t need to publish this to the whole company. I understand the politics it can produce. But at a minimum, senior leadership and the team itself must be aligned on the value the team creates.
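The ledger doesn’t need to be more than a structured table. Here is a minimal sketch in Python, with illustrative field names and an invented example row; adapt it to whatever format your team already uses for quarterly reporting.

```python
from dataclasses import dataclass

# One row per decision in the quarterly "data-attributed revenue" ledger.
# Field names and the example entry are illustrative, not a standard.
@dataclass
class LedgerEntry:
    quarter: str               # e.g. "2024-Q3"
    decision: str              # what decision was made
    data_input: str            # what analysis, experiment, or model informed it
    est_revenue_impact: float  # estimated incremental revenue, in dollars
    causal_confidence: str     # how confident you are in the causal claim

ledger = [
    LedgerEntry("2024-Q3", "Raised annual plan price 8%",
                "Price-elasticity analysis on 2023 cohorts",
                420_000, "medium"),
]

total = sum(e.est_revenue_impact for e in ledger)
print(f"Data-attributed revenue this quarter: ${total:,.0f}")
```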
Are Three Metrics Enough?
The temptation is to add more metrics. Dashboard usage. NPS from stakeholders. Model accuracy. Data quality scores. Those are all operational metrics. They measure whether the data team is running well, not whether it’s creating value.
Decision velocity, experiment yield, and revenue-affected measure what actually matters: is the business making better decisions faster because this team exists?
In the AI era, that question comes into sharper focus. AI can produce the analysis. AI can build the dashboard. AI can even run the experiment. What AI can’t do is decide which questions matter, connect the answers to business outcomes, and take accountability for the result. That’s what the data team does.
Does this mean we haven’t done these things? No. It just means that adapting to an AI-first era requires our measurement framework to change as much as the work we do does.

Two more I use a lot that aren’t vanity metrics like dashboard usage:
1) Cost reduced
2) Future cost avoidance
They’re not very important for growing companies, where revenue growth will be the most important commercial metric, but they’re super relevant to more mature or stable enterprises going through a cost-cutting phase.
#2 is also relevant to fast-growing orgs where some costs are currently quite low but can easily spiral out of control without pre-emptive measures (which will feel thankless if not reported on properly).