
Why expensive MMM tools like Haus might be the next SaaS category in marketing to be axed

Andrew Watson·March 17, 2026

My time owning a marketing agency with Alex has taught me a few valuable lessons. Some the easy way, through the lens of others. Some by paying the price.

One of those lessons is accepting that most software platforms have a short window of relevance, usually accompanied by a wave of scepticism. I’ve found that if you’re first to the party in a particular category, you enjoy a longer lifetime value, at least if you get the marketing right. Recharge for subscriptions, Triple Whale for attribution, Motion for strategy, you get the idea. But with LLMs evolving in-platform at such speed, there’s now a steep cliff waiting on the other side for a lot of these tools if they don’t adapt quickly.

In my last article, I talked about how Claude is eliminating the need for creative strategists by integrating directly with Meta, to the point where designers can inherit that role as part of the design flow. Next on my list? Media mix modelling tools, or MMM tools.

For context, I wouldn’t put platforms like Triple Whale or Northbeam in this category. Those are mostly used as gut-check tools with alternative attribution models to Meta or Google. Founders aren’t typically using them to model a before-and-after lift test, also known as an incrementality or hold-out test. They’re using them for directional insights on how to optimise campaigns.

How hold-out tests typically work for e-commerce brands

The best example of an MMM tool that has grown aggressively over the last 2 years is probably Haus. Built by successful ex-Google operators, they’ve created a genuinely impressive system designed to answer a simple but critical question: which variable is driving the most or least incrementality in my business?

Like any experiment (think back to science class), you ideally isolate one variable at a time. The more variables that change simultaneously, the harder it is to draw a clean conclusion. So best practice is to change just one variable per incrementality test.

Haus will also advise running each test in a controlled environment. If you’re adjusting one variable, other factors such as promotions or spend across other channels should remain as constant as possible to improve reliability.

E-commerce brands can run various versions of these tests. It might be as simple as excluding an age group, adjusting product catalogue spend, or running a geo hold-out test.

For example:

You exclude paid media in a selection of secondary states while maintaining spend in primary states. After four to six weeks, you measure the difference in revenue performance between the two groups. If the exposed states outperform the hold-out states by 12 percent after adjusting for seasonality and baseline growth, you can infer incremental lift. If they don’t, the channel may be less effective than assumed.

Conceptually, it’s not magic. It’s structured experimentation.
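The arithmetic behind that inference can be sketched in a few lines. The state groups and revenue figures below are hypothetical, chosen purely to illustrate how the hold-out group adjusts for seasonality and baseline growth:

```python
# Hypothetical geo hold-out sketch. All figures are illustrative,
# not real campaign data.

# Average weekly revenue per state group, before and during the test.
baseline = {"exposed": 100_000.0, "holdout": 80_000.0}  # pre-test weeks
in_test  = {"exposed": 122_000.0, "holdout": 88_000.0}  # test weeks

# Growth each group saw relative to its own baseline.
exposed_growth = in_test["exposed"] / baseline["exposed"] - 1  # 22%
holdout_growth = in_test["holdout"] / baseline["holdout"] - 1  # 10%

# The hold-out states received no paid media, so their growth captures
# seasonality and baseline trend. The difference is the inferred lift.
incremental_lift = exposed_growth - holdout_growth

print(f"Incremental lift: {incremental_lift:.1%}")
# prints: Incremental lift: 12.0%
```

A real analysis would add significance testing and better-matched geo pairs, but the core comparison is exactly this simple.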

How do I get around expensive retainers to gauge incrementality?

It goes without saying that it’s in any business’s interest to retain a client for as long as possible (other than your divorce lawyer, who bills by the hour).

One of the benefits of MMM tools is the ability to position testing as a long-term commitment. “The longer you run the test, the more reliable the data.” That’s not wrong. But it does create natural retention. If each hold-out test runs four to six weeks, and a brand wants to test five or six variables, suddenly you’re looking at a six-month engagement.
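As a back-of-envelope sketch, here is how that engagement length compounds. The figures are my assumptions from the ranges above, not Haus’s actual pricing:

```python
# Hypothetical retention math: how sequential hold-out tests stack up.
tests = 5             # variables the brand wants to test
weeks_per_test = 5    # midpoint of the 4-6 week range
retainer = 10_000     # assumed monthly fee in USD

months = tests * weeks_per_test / 4.33  # avg weeks per month
total_cost = round(months) * retainer

print(f"~{months:.0f} months, ~${total_cost:,} total")
# prints: ~6 months, ~$60,000 total
```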

If you’re billing $10K per month with a minimum six-month contract, a few assumptions are at play:

  1. The business can afford it and sees the upside as worth the risk.
  2. The brand is large enough that even marginal improvements in media mix offset the retainer.
  3. Most importantly today, they haven’t asked AI if it can do the same thing for them.

Haus is a good example of a company that is less accessible to smaller brands, particularly those spending under $200K per month. Their model works well for larger businesses, and they’ve built a strong company accordingly.

But it’s important to remember that hold-out tests typically end in one of two outcomes. Either the variable improves performance and demonstrates scalable incrementality, or it doesn’t and performance declines.

In both scenarios, the brand inherits the risk of the experiment.

Which is why their ideal client is one that can absorb that volatility, not a poor bootstrapped brand gambling their last $10K.

So I tested it myself

To understand whether there was a simpler and cheaper way to democratise this for smaller brands, I ran an experiment with Claude.

I imported dozens of datasets: campaign-level performance, ad-level data, audience segments, age breakdowns, demographics, media types, regional splits and more. I gave Claude as much context as possible, just so it knew the brand.

What I wanted to know was simple. Could it structure, model and recommend a hold-out test with enough rigour to be useful? To my pleasant surprise, Claude created a full incrementality test template and outline for the brand in question.

To avoid getting myself in trouble, here’s an example of the PDF output it generated for a made-up brand looking to run a geo hold-out test, in less than five minutes of prompting (!!!).

An example incrementality test template which took < 5 minutes to generate

No dashboard. No six-month contract. No $10K monthly retainer.

What this means for brands like Haus

If I were head of growth or client success at a company like Haus, I’d be having some interesting internal conversations.

Even if the company is growing, AI is quietly eroding the necessity of paying for the service in the first place.

You’re likely leaning on one of three justifications when speaking to investors:

  1. Large brands are happy to pay. What’s $10K per month to a $100M business? It saves internal time on their side.
  2. The backend likely uses similar modelling logic to AI, but the Haus dashboard is much clearer and more user-friendly for clients. Basically, Northbeam for MMM.
  3. You have proprietary data aggregation capabilities that an LLM can’t replicate.

The third is really the only defensible moat, and even that depends on access, not intelligence. That said, having worked with the Haus team before, I’d be happy to be proven wrong if their AI modelling capabilities turn out to be greater than Anthropic’s.

Unlike creative strategy platforms that have bundled LLMs into diversified offerings (research tools, ad-level analysis, creative insights, recommendation dashboards), MMM is largely a structured data input and output exercise.

Which makes this category particularly vulnerable.

I find it hard to see a future where even large brands don’t eventually ask, “Why can’t someone internally just run this in AI, send it to the ads team, and get this rolling today?” Then again, WeWork convinced SoftBank that $47 billion was a fair price for office space, so I won’t be the first one to say discretionary spending isn’t dead.

But then the question shifts. Is that the marketer’s job? Or the data team’s job?

Remember when designers didn’t have to be prompt engineers? Does anybody know whose job that is now? While I doubt Haus or other MMM tools will die tomorrow, there’s little question that Claude, and the AI models of the future, will democratise the solution they sell today.

Originally published on Substack.