When we inherited the reference program, there were five customers doing all the work. Same names. Every vertical. Every deal stage. Every quarter.
Their CSMs had stopped asking because they felt bad about it. Sales had stopped trusting the program because the answer was always the same five people — and prospects could tell. Not to mention, it made us look smaller than we are.
The acceptance rate was sitting at roughly one in three.
WHY THAT NUMBER MATTERS
Below 50% tells you one of two things: your matching is wrong — you're asking the right customers for the wrong asks — or your advocate relationships are overdrawn, meaning you've been withdrawing without depositing. For us, it was both.
A low acceptance rate isn't a relationship problem. It's a data problem wearing a relationship problem's coat. Dig into the pattern and both of those root causes usually show up together.

You're over-indexing on the same handful of highly visible promoters (the overdrawn relationships) while a much larger pool of willing advocates sits unactivated: unscored, untiered, and never asked (the mismatched asks). We audited every reference touch over the previous 18 months. The same accounts had been asked dozens of times. Meanwhile, we had over 200 active promoters — customers with NPS scores of 9 or 10 — who had never been contacted for a single reference call.
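That audit is simple to mechanize once the ask history is in one place. A minimal sketch, assuming the ask log is a flat list of account names and NPS lives in a lookup; the field names and the over-ask threshold are illustrative, not the program's actual values:

```python
from collections import Counter

def audit_reference_touches(ask_log, nps_scores, overask_threshold=12):
    """Flag the two failure modes from the audit.

    ask_log: list of account names, one entry per reference ask.
    nps_scores: dict mapping account -> latest NPS score (0-10).
    Returns (overasked accounts, promoters never asked).
    """
    asks = Counter(ask_log)
    # Accounts asked repeatedly: the overdrawn relationships.
    overasked = [a for a, n in asks.items() if n >= overask_threshold]
    # NPS 9-10 promoters with zero asks: the unactivated pool.
    promoters = {a for a, s in nps_scores.items() if s >= 9}
    unactivated = sorted(promoters - set(asks))
    return overasked, unactivated
```

Run against real CRM exports, the two output lists are the whole argument: who needs a rest, and who has never been asked.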
THE REBUILD
We rebuilt from the data. The goal wasn't a bigger bench — it was a smarter one. That meant scoring the full customer base on the signals that actually predict a good reference, not just willingness to say yes.
We scored on NPS, product adoption depth, ICP profile fit, and advocacy history. Then we built tiers. And we created an onboarding process for new advocates that started with what they wanted from the relationship — not what we needed from them.
"The ask comes after the value, not before. Peer access, co-marketing opportunities, early product input — that's the deposit. The reference call is the withdrawal."
The scoring model weighted a handful of signals that consistently separated good references from overextended ones: exact industry match, company size proximity to the deal, NPS score, community engagement role, open support tickets, recency of the last advocacy ask, case study presence, and tenure.
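The signals above are from the program; the weights below are not. A hedged sketch of what a model like this might look like, with illustrative weights and field names:

```python
from dataclasses import dataclass

@dataclass
class Advocate:
    nps: int                  # 0-10 survey score
    industry_match: bool      # exact industry match with the deal
    size_proximity: float     # 0-1, company size closeness to the deal
    community_role: bool      # holds a community engagement role
    open_tickets: int         # open support tickets (risk signal)
    days_since_last_ask: int  # recency of the last advocacy ask
    has_case_study: bool
    tenure_years: float

def reference_score(a: Advocate) -> float:
    """Weighted fit score; higher means a better reference candidate.
    Weights are illustrative, not the program's actual model."""
    score = 0.0
    score += 25 if a.industry_match else 0
    score += 15 * a.size_proximity
    score += 2 * a.nps                              # max 20
    score += 10 if a.community_role else 0
    score -= 5 * min(a.open_tickets, 3)             # penalize open escalations
    score += min(a.days_since_last_ask / 30, 10)    # reward rested advocates
    score += 10 if a.has_case_study else 0
    score += min(a.tenure_years * 2, 10)
    return score
```

The design point is the negative terms: open tickets and a recent ask actively subtract, which is how the model separates good references from overextended ones rather than just ranking enthusiasm.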
Once tiers were set, the matching happened automatically. Sales stopped getting five of the same names. They started getting three names that actually fit the deal — and acceptance rates followed.
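The matching step can be sketched just as simply. Assuming each candidate carries a precomputed fit score, one plausible rotation rule (the 90-day rest window is a hypothetical parameter, not from the source) looks like this:

```python
def match_references(candidates, n=3, rest_days=90):
    """Return the top-n candidate names for a deal, skipping anyone
    asked within rest_days. candidates: list of dicts with keys
    'name', 'score', and 'days_since_last_ask'."""
    # Rotation rule: rested advocates only, so the same names
    # stop showing up on every request.
    rested = [c for c in candidates if c["days_since_last_ask"] >= rest_days]
    rested.sort(key=lambda c: c["score"], reverse=True)
    return [c["name"] for c in rested[:n]]
```

Even the highest-scoring advocate drops out of rotation if they were just asked, which is what keeps sales from seeing the same five names.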
THE NUMBER THAT MOVED SALES LEADERSHIP
The number that changed the conversation with sales leadership wasn't bench size. We doubled it, and they nodded politely. It wasn't acceptance rate either, even though we moved it from one in three to well above 60%.
"Deals that included a reference call closed at least twice the rate of deals that didn't."
That's the number that gets customer marketing a seat at the revenue table. Not "we have more advocates." Not "our NPS improved." Win rate impact. Everything else is context for that headline.
When you can walk into a QBR and say every deal that touched the reference program closed at twice the rate of deals that didn't — the conversation about headcount, tooling, and investment changes completely. You're no longer defending a program. You're reporting on a revenue driver.
But don't take our word for it — here's one of our past employer's customers talking about what genuine advocacy feels like from the inside:
"Sharing my successes has helped me build relationships with other customers. We'll bounce ideas off of each other like, 'Hey, this is what we're struggling with.' It's been really great for keeping us ahead of the curve."
Noah Brooks — Head of Digital Marketing, University Hospitals
That's what a healthy reference relationship looks like. Noah isn't doing anyone a favor. He's getting something out of it. That's the only kind of advocacy that scales.
Reference programs that scale aren't built on goodwill. They're built on data, relationship infrastructure, and a clear answer to what advocates get out of it. More on that Wednesday.