Why Your CS Function Won't Scale Until You Rethink AI Entirely

The difference between a customer who churns and one who expands often comes down to one thing: whether the customer is seeing value.

To make that happen, a great CSM identifies what success looks like for that specific customer, drives them towards the outcomes they actually care about, and ensures they're realising value at every stage of the journey. The problem is that most CS teams can only afford to do this for a fraction of their customer base, and that fraction is shrinking. Customer bases are growing, but headcount budgets aren't.

The old solutions, hiring more CSMs and adding more processes, just aren't working. The model is broken and everyone is scrambling.

Now, agents can completely change that equation. Not by replacing the CSM relationship, but by making that level of attention possible for every customer, so that reaching value isn't reserved for your top tier.

The Two Solutions Everyone Is Trying (And Why Both Fall Short)

Today, most teams faced with a growing customer base and shrinking resources are exploring two avenues to continue delivering value to every customer as they scale:

AI as a copilot:

  • Drafting follow-up emails, outreach, and QBR content

  • Summarising call transcripts and extracting action items

  • Building success plans or renewal risk assessments

The result: useful, but the efficiency gains are marginal. CSMs still need to know when to use it, what to ask for, and what to do with the output. The ceiling is hit quickly because the human is still doing the work, just slightly faster.

Automations:

  • Predefined journeys based on customer health

  • Triggering milestone-based emails on a fixed schedule

  • Sending a reminder to schedule key touchpoints like QBRs or renewal conversations

The result: you're scaling a process that was never truly built around the customer. Every journey is pre-defined, which means you're always routing customers down the most relevant path available, not the right one. And the moment a customer steps outside it, the whole thing breaks.

Ultimately, neither AI copilots nor automations solve the scaling problem, but up until now they've been the best available options, making humans marginally faster and systemising certain tasks. The reason neither approach actually moves the needle is the same: the human is still the one accountable for whether the customer reaches value.

Agents shift that accountability. Instead of supporting the human doing the work, the agent owns the outcome end-to-end, so that when a CSM does show up, it's because their presence genuinely matters. Not to chase a setup or send a nudge, but to have the kind of high-value, face-to-face conversation only a human can.

Hook's Take: What Makes a Real Agent, Not Just a Glorified Copilot

What we’ve noticed is that a lot of what gets called an 'agent' today is really just automation with better branding. At Hook, there are three standards we believe every agent must get right to earn their name.

1. Outcome Driven

To truly drive results, AI agents need to own an outcome, the same way a CSM might own value, upsell, or renewal. Agents shouldn't just own isolated tasks like drafting an email or triggering a sequence; that leaves them in the same lane as a copilot or automation. They need to own the end result, and know which tasks to execute to get customers there.

The beauty of this is that you get the scale of an automated journey, but without the rigid pre-defined paths: you set the end goal and define guardrails, and the agent builds each customer's unique route to get there. Because it's not following a linear flow, it can actually adapt to what's happening in real time, consider every customer's situation individually, and always find the next best step as things change.

You can think of enabling an agent like onboarding a new team member: you'd tell them what success looks like and how to operate (their role's remit and the company rules), and they would then define the exact steps they need to take to do a good job.

In practice, this is what setting goals and guardrails might look like for an onboarding agent:

Goals:

  • Week 1, Configuration: Get set up and ready to run

  • Week 2, Adoption: Admins logged in and using the basics

  • Week 3, Activation: Meaningful activity across 3 out of the 5 areas that are key in reducing renewal risk

Guardrails:

  • Communication frequency: No more than two outbound messages per week per customer

  • Content: Never make promises about product features or roadmap

  • Escalation triggers: Hand off immediately if a customer expresses dissatisfaction or requests to speak to a human

In this example, an automation would stall if admins weren’t using enough basic features by the end of week 2. However, an agent would be able to recognise that several of them have used the product before, have already demonstrated value in 2 of their 3 required areas, and just need a targeted nudge to complete the third, rather than being pushed through steps they don't need.

This illustrates the most significant way in which an outcome-driven agent differs from an automation. When an agent is in charge, the journey is not defined when the customer first embarks on it. Instead, the journey will continuously build upon itself after each executed step, using both historical and live, customer-specific data.
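To make the shape of that setup concrete, the goals and guardrails above could be captured in a small configuration object. This is a hypothetical sketch, not Hook's actual product or API; every class and field name here is invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class Goal:
    week: int      # which week of onboarding this outcome belongs to
    name: str      # e.g. "Configuration", "Adoption", "Activation"
    target: str    # what "done" looks like for this goal


@dataclass
class AgentConfig:
    """Hypothetical goals-and-guardrails config for an onboarding agent."""
    goals: list                  # the ordered outcomes the agent owns
    max_messages_per_week: int   # communication-frequency guardrail
    forbidden_content: list      # content guardrail
    escalation_triggers: list    # hand off to a human when any of these fire


onboarding_agent = AgentConfig(
    goals=[
        Goal(1, "Configuration", "Set up and ready to run"),
        Goal(2, "Adoption", "Admins logged in and using the basics"),
        Goal(3, "Activation",
             "Meaningful activity in 3 of the 5 key renewal-risk areas"),
    ],
    max_messages_per_week=2,
    forbidden_content=["promises about product features or roadmap"],
    escalation_triggers=["customer expresses dissatisfaction",
                         "customer requests a human"],
)
```

Note what's absent: there is no step list. The config fixes the destination and the boundaries; the agent decides the route for each customer.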

2. Personalised

For an agent to personalise effectively, it needs to be fed the right context: product usage data, email and call history, support tickets, etc. The more it knows, the better the judgement calls it can make. An agent without this level of information would be forced to make assumptions, producing generic content that doesn't drive outcomes and operating on the same level as an automation.

Where an agent is truly superior is that, unlike an automation that has to threshold everything ("if X and Y and Z, do this"), an agent can hold multiple signals at once and decide what this specific customer actually needs at this moment. This is what allows personalisation to go so much deeper than dynamic fields in an email template.

Here’s an example: take two customers, with the same product, same agent, same week in their journey, but two completely different situations.

  • Scenario A: Restaurant, 14 days in, no technical setup started: The agent reaches out to the champion, directly referencing what hasn't been done, and rather than talking about abstract features, it speaks to what matters to a restaurant specifically: promotional content, scheduling breakfast and lunch menus, making sure customers can see what's on offer before they order. It leverages the resources available to make it easier for the champion to get where they need to go.

  • Scenario B: University, 14 days in, 5 of 6 screens set up: No urgency about setup, because it's nearly done. The agent acknowledges the customer's progress, flags the open support ticket on screen 6, and identifies the next area of value: scheduling content in advance for open days, freshers' week, and exam periods. It didn't need a rule that said "if screens > 4, send template C." It read the situation and made a call.

3. Human Aware

The final key requirement of a good agent is that it knows when a human needs to step in, and when it does hand the reins back to the CSM, it does so with all the necessary context. With a proper handover, the CSM jumps in knowing exactly what's happened, what's been tried, and what to do next; they're not starting from zero. This ensures a seamless customer experience.

In practice, that escalation logic has to be specific and reasoned. For example: a customer who has raised more than five high-severity support tickets and hasn't achieved their Week 2 goals gets flagged for CSM intervention, not with a vague alert, but with a full handoff. Everything that could be handled autonomously until that point was, which protected the CSM's time for the moments where human judgement and face-to-face conversations really matter.
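That escalation rule can be sketched as a simple check that bundles the full context into the handoff rather than firing a bare alert. The field names and thresholds below are invented for illustration, not taken from any real system:

```python
def should_escalate(customer: dict):
    """Apply the example rule: more than five high-severity tickets AND
    Week 2 goals unmet -> hand off to a CSM with full context."""
    high_severity = [t for t in customer["tickets"] if t["severity"] == "high"]
    goals_missed = not customer["week2_goals_met"]

    if len(high_severity) > 5 and goals_missed:
        # Not a vague alert: the handoff carries what happened, what the
        # agent already tried, and a suggested next step for the CSM.
        return True, {
            "customer": customer["name"],
            "reason": "6+ high-severity tickets and Week 2 goals unmet",
            "open_tickets": [t["id"] for t in high_severity],
            "actions_tried": customer["agent_actions"],
            "suggested_next_step": "book a call to unblock setup",
        }
    return False, None
```

The point of the sketch is the second return value: the handoff payload is what turns an alert into a seamless takeover.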

These three principles don't live in isolation; they reinforce each other. The agent can only personalise properly because it's goal-oriented and therefore not following rigid journeys. And it can only escalate intelligently because it understands both the outcome gap and the full customer context.

Three Building Blocks to Get to an Agentic CS Function

Knowing what good looks like is one thing; getting there is another. A fully agentic CS function doesn't happen overnight: there's a maturity curve, and most teams are already somewhere on it. The question is: what's the right next step from where you are?

Step 1: Autonomous copilot AI

If your team is still manually prompting AI for every email and summary, the first move is making that proactive: set it up to run without being asked every time. Not an agent yet, but the foundation. Ask yourself: What copilots do you already have that you could standardise and run automatically based on a simple rule?

Step 2: Agents that own outcomes

Pick something bounded and measurable; onboarding is a good place to start. Give the agent a goal, give it context, let it run. That's where the economics start to shift. Ask yourself: What's the one thing that, if an agent owned it end-to-end, would meaningfully change your team's capacity?

Step 3: End-to-end customer journey with agent and human hand-offs

An onboarding agent hands off to an adoption agent. The adoption agent feeds into a success agent, then a renewals agent. Each one owns its slice and knows the conditions for bringing in a human, so you can reliably cover the full customer lifecycle and ensure no outcome is missed. Ask yourself: What happens after your first agent's job is done? Who does it hand off to?

The full customer lifecycle, covered. Agents own each stage autonomously, escalating to a CSM when the situation calls for human judgement, then picking back up once resolved.
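One way to picture that chain is as an ordered list of stages, each with its own exit condition; the first stage whose condition isn't yet met is the agent that currently owns the customer. A minimal sketch, with stage names and conditions invented purely for illustration:

```python
# Hypothetical lifecycle chain: each agent owns one stage and declares the
# condition under which it hands the customer to the next agent.
LIFECYCLE = [
    ("onboarding", lambda c: c["activated"]),
    ("adoption",   lambda c: c["weekly_active_users"] >= 0.5 * c["seats"]),
    ("success",    lambda c: c["value_review_done"]),
    ("renewals",   lambda c: c["renewed"]),
]


def owning_agent(customer: dict) -> str:
    """Walk the chain: the first stage whose exit condition isn't met
    is the agent responsible for this customer right now."""
    for stage, exit_condition in LIFECYCLE:
        if not exit_condition(customer):
            return stage
    return "complete"
```

Because ownership is derived from the customer's actual state rather than from elapsed time, a customer who regresses (say, adoption drops after onboarding) is automatically picked back up by the right agent.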

The Vision Worth Building Towards

Our vision of giving every customer a dedicated CSM isn't a distant aspiration; it's an engineering problem that's now solvable. The teams making progress aren't waiting for a perfect strategy or a full transformation roadmap. They're getting ahead in two ways. First, they've stopped thinking in tasks and started thinking in outcomes. Second, they're being deliberate about where humans add the most value, and choosing their agents and automations around that (rather than expecting humans to cover everything and using AI to make that slightly more bearable).

The question isn't whether AI can change the economics of your CS function. It's whether you're willing to change the model, not just the tools.

The technology to give every customer, at every tier, a dedicated CSM exists today. The only thing left is deciding when to start using it.

See Outcome-Driven Agents in Action