Strategy

Why most customer marketing programs fail before they start

Three structural mistakes that undermine B2B customer marketing programs — measuring activity instead of revenue impact, running programs in silos that can't share signals, and treating every customer the same — with a concrete fix for each.



Between us, Nina and I have spent 20+ years building customer marketing programs. I've spent a significant portion of that time inheriting programs that weren't working and trying to figure out why.

What I've found is that struggling customer marketing programs rarely fail because of a lack of effort or talent. The people running them are usually sharp, motivated, and genuinely care about their customers. The failure is almost always structural. It's in how the program was designed. Or more often… how it wasn't.

The same three mistakes show up over and over. If you're a customer marketing practitioner reading this, at least one of them will feel uncomfortably familiar.


01 You're measuring the wrong things — and it's costing you budget and credibility

Tell me if this sounds familiar. You send a quarterly report to leadership. It shows email open rates, event attendance, G2 reviews collected, community posts, case studies published. The numbers look solid. Leadership nods and moves on, and then cuts your headcount at the next planning cycle anyway.

The problem isn't your programs. It's your measurement framework. Open rates and post counts are activity metrics. They tell leadership what you did, not what it was worth. And in a world where every function is being asked to justify its budget in revenue terms, activity metrics aren't enough.

Customer marketing sits in a uniquely powerful position to demonstrate revenue impact, but most teams never make the connection explicit. The customers who participate in your advocacy programs: do they renew at higher rates? The accounts in your lifecycle nurture sequences: do they expand faster? Do they adopt more features? The deals where sales used a customer reference: do they close at higher rates? Do they progress faster? These are the numbers that change conversations with leadership.

If you can't answer 'what did this program do for NRR?' or 'how did it influence pipeline?' you don't have a measurement problem. You have a survival problem.

The Fix

Build a measurement framework before you build another program. Define the revenue metrics your programs should influence — NRR, expansion ARR, win rate, pipeline influence — and establish baselines. Then instrument every program to track participant vs. non-participant outcomes. This isn't a data science project; it's a discipline. Start with one metric, one program, one quarter. Make the number visible to leadership and never let it disappear from your reporting. Bonus tip: before you commit to any technology platform, make sure it can generate this data and export it easily into your "single pane of glass," whatever that happens to be.
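To show how small this can start, here's a minimal sketch of participant vs. non-participant reporting in Python. It assumes you can export two files from your existing systems: one row per account with its renewal outcome and expansion ARR, plus a list of program participants. Every file and column name here is an illustrative placeholder, not a prescription.

```python
# A minimal sketch of participant vs. non-participant reporting.
# All file and column names are illustrative placeholders; swap in
# whatever your CRM or CS platform actually exports.
import pandas as pd

# One row per account: did it renew, and how much expansion ARR did it add?
accounts = pd.read_csv("accounts.csv")        # account_id, renewed (0/1), expansion_arr
# One row per account that participated in the program being measured
participants = pd.read_csv("advocates.csv")   # account_id

accounts["participant"] = accounts["account_id"].isin(participants["account_id"])

# Renewal rate and average expansion ARR, participants vs. everyone else
summary = accounts.groupby("participant").agg(
    accounts=("account_id", "count"),
    renewal_rate=("renewed", "mean"),
    avg_expansion_arr=("expansion_arr", "mean"),
)
print(summary)
```

One caveat worth stating in any leadership report: program participants self-select, so treat the gap between the two groups as directional evidence of impact, not proof of causation.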

02 Your programs don't talk to each other — so they can't compound

Here's a scenario that plays out at almost every B2B SaaS company I've worked with. The customer marketing team runs a reference program. The community team runs a community. The lifecycle team runs email nurture sequences. The CS team tracks NPS. And none of these programs are connected to each other in any meaningful way.

The reference program manager doesn't know which NPS promoters haven't been tapped yet. The community manager doesn't know which members are at risk of churning. The lifecycle team doesn't know which customers are already engaged enough to be advocacy candidates. Everyone is working hard. Nothing is compounding.

This is the silo problem, and it's more damaging than most teams realize. When programs don't share data and signals, you end up asking the same five customers for everything. You miss the warm advocates sitting in your community who've never been asked, and you miss the opportunity to bring in new voices and help them feel appreciated. You send retention emails to customers who are already expansion candidates. You build a customer advisory board and forget to loop in the CS team that knows which accounts are the right fit.

The programs aren't broken. The connective tissue is missing. And without it, you're running five separate programs instead of one compounding system.

The Fix

Map the signals. Start by listing every data point your team touches — NPS scores, product usage telemetry, community engagement, email response rates, renewal dates, expansion history — and ask where each one should be feeding other programs. If you don't have access to certain data points (data often sits in silos), find out how to get them and layer them into your analysis. Your NPS data should be feeding your reference pipeline. Your community engagement should be feeding your advocacy scoring and your CS platform's expansion and churn signals. Your lifecycle stage should be determining who gets an advocacy ask, and when. You don't need perfect data infrastructure to start — you need a shared signal map and the discipline to act on it.
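If it helps to see the shape of it, here's a small sketch of a signal map expressed in Python. The sources, thresholds, and field names are assumptions for illustration; the point is that the map, plus one rule you actually act on, can live in a spreadsheet or a few lines of code long before you have real data infrastructure.

```python
# A sketch of a shared signal map: each signal the team touches,
# where it lives today, and which programs it should feed.
# Sources, thresholds, and field names are illustrative assumptions.
SIGNAL_MAP = {
    "nps_score":            {"source": "survey tool", "feeds": ["reference pipeline", "advocacy scoring"]},
    "community_engagement": {"source": "community",   "feeds": ["advocacy scoring", "churn signals"]},
    "product_usage":        {"source": "telemetry",   "feeds": ["lifecycle stage", "expansion signals"]},
    "renewal_date":         {"source": "CRM",         "feeds": ["advocacy ask timing"]},
}

def reference_candidates(customers):
    """One rule acted on: NPS promoters who have never been asked for a reference."""
    return [c for c in customers
            if c.get("nps_score", 0) >= 9 and not c.get("reference_asked", False)]

# Illustrative records: a1 surfaces as the warm, never-asked advocate
customers = [
    {"account_id": "a1", "nps_score": 10, "reference_asked": False},
    {"account_id": "a2", "nps_score": 9,  "reference_asked": True},
]
print(reference_candidates(customers))
```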

03 You're treating all customers the same — so your programs feel generic to everyone

A new customer in their first 30 days has fundamentally different needs than a customer in their third year who's expanded twice. A customer at a 50-person startup behaves differently than a customer at an enterprise with five internal champions. A customer who's a power user of one feature and barely touched another needs a different message than a customer who's adopted broadly but hasn't gone deep anywhere.

Most customer marketing programs treat all of these customers the same. They get the same onboarding email sequence. The same monthly newsletter. The same webinar invite. The same renewal outreach. And customers notice: not explicitly, but in the way they engage. Generic programs produce generic engagement. Low open rates, low attendance, low advocacy conversion. And then the team concludes the program isn't working, when the real problem is that it wasn't designed for anyone in particular.

Segmentation is the difference between a program that gets 8% open rates and one that gets 40%. It's the difference between asking 200 customers for a case study and getting two, versus asking 20 of the right customers and getting twelve. It's not a nice-to-have. It's the foundation that everything else is built on. As the adage goes, meet them where they are.

The Fix

Start with three segments, not twenty. Pick the dimensions that matter most for your business — lifecycle stage, company size, product adoption depth, or some combination — and design meaningfully different experiences for each. 'Different' doesn't have to mean entirely different programs. It can mean different messaging angles, different CTAs, different timing, different recognition. The goal is that every customer who receives something from you feels like it was written for them, not blasted at a list. One well-segmented program will outperform five generic ones every time.
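As a sketch of how simple that first cut can be, here's a three-segment rule in Python. The dimensions and thresholds (90 days of tenure, five features adopted) are illustrative assumptions; pick whatever combination actually matters for your business.

```python
# A minimal three-segment cut on lifecycle stage and adoption depth.
# Field names and thresholds are illustrative assumptions.
def segment(customer):
    if customer["tenure_days"] <= 90:
        return "new"                   # onboarding focus
    if customer["features_adopted"] >= 5:
        return "established-broad"     # expansion and advocacy asks
    return "established-narrow"        # depth-of-use nurture

customers = [
    {"account_id": "a1", "tenure_days": 30,  "features_adopted": 1},
    {"account_id": "a2", "tenure_days": 400, "features_adopted": 7},
    {"account_id": "a3", "tenure_days": 400, "features_adopted": 2},
]
for c in customers:
    print(c["account_id"], segment(c))  # a1 new, a2 established-broad, a3 established-narrow
```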


The thread connecting all three

Look at these three mistakes together, and a pattern emerges: they're all symptoms of customer marketing being designed as a set of programs to run rather than a system to build. They're also symptoms of a larger cross-functional disconnect between Marketing and customer-facing teams like Sales and Customer Success. It's a problem as old as time, and an outside eye like Rally's can help prescribe solutions.

Because ultimately, programs can be measured in isolation, run in silos, and blasted to undifferentiated lists. But systems can't. A system requires shared signals, connected programs, clear ownership of outcomes, and a measurement framework that ties everything back to revenue. It requires more upfront design, but it compounds in a way that no individual program ever will.

If you've been running hard and not seeing the results you expect, it's probably not your effort. It's probably the architecture. The good news is that architecture can be changed, and the changes don't have to be sweeping to make a meaningful difference. Start with one measurement, one connection between programs, one segmentation cut. Build from there.

That's what a system looks like in the making.

If any of this resonates — or if you're in the middle of diagnosing one of these problems right now — I'd love to hear what you're seeing. Get in touch →
