The other day, I was poking around in Copilot at work and noticed something that made me pause:
One of the suggested use cases was “Career Coach.” That caught my attention, because career coaching is a big part of what I do – helping people figure out what’s next. So I found myself wondering: is coaching the next profession people will say AI is coming to replace?
I tried to build a coaching agent
Out of curiosity (and maybe a bit of existential anxiety), I tried setting up an AI agent to coach me. It was easy enough – just tell it what you want it to do, set some instructions, and off it goes.
To be clear, I’m not in a crisis. I’m not looking for a new purpose. But I asked it anyway – “I feel lost and need help finding my purpose.”
As it happens, the response was blocked by Responsible AI settings. Which, to be honest, was a bit of a relief. Though I’m still not sure whether those filters can be turned off at the org level.
So, could AI coach me?
In some ways, yes. If you know what you’re doing, you can get 80% of the way there. You can build an agent that asks open-ended questions, avoids judgement, and helps you reflect. But that’s the thing – you have to know that’s what good coaching looks like in the first place. You can even tell it to challenge you – though it won’t know when to do that unless you spell it out.
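Here’s roughly what that looks like in practice – a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name; the instructions are my own wording, not a recipe. Notice that every coaching behaviour – open questions, no judgement, when to challenge – exists only because someone wrote it down:

```python
# A minimal coaching-agent sketch. Assumes the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY set in the environment;
# the model name and instructions are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

# Every coaching behaviour the agent has must be spelled out here.
COACH_INSTRUCTIONS = """You are a career coach.
- Ask one open-ended question at a time; don't give unsolicited advice.
- Reflect back what you hear, without judgement.
- If the client sounds certain but vague, gently challenge the assumption."""

history = [{"role": "system", "content": COACH_INSTRUCTIONS}]

def coach_turn(user_message: str) -> str:
    """Send one conversational turn and return the agent's reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat model would do
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(coach_turn("I'm weighing up whether to move into management."))
```

Swap out those three bullet points and you have a completely different coach. The model brings fluency; the coaching judgement has to come from whoever writes the instructions.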
I saw this same pattern recently when I tried building an application from scratch using Cursor, an AI-powered code editor. The goal was to explore how AI had changed our development team’s workflow – looking at patterns in pull requests, commit frequency, and code changes since we introduced GitHub Copilot. The AI did a decent job generating code, but I still had to know how to handle authentication, craft the right prompts, and set up proper error logging to troubleshoot issues. I still needed to understand the fundamentals to make it work.
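To give a flavour, here’s a stripped-down sketch of the commit-frequency piece, assuming the GitHub REST API via the requests library – the repository name is a placeholder, and a real version needs pagination and proper error logging, exactly the fundamentals I’m talking about:

```python
# A rough sketch of counting commits per week via the GitHub REST API.
# Assumes `requests` is installed, GITHUB_TOKEN is in the environment,
# and REPO is replaced with a real repository. No pagination here.
import os
from collections import Counter
from datetime import datetime

import requests

REPO = "your-org/your-repo"  # placeholder
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def commits_per_week(since: str) -> Counter:
    """Count commits per ISO week since an ISO-8601 timestamp."""
    url = f"https://api.github.com/repos/{REPO}/commits"
    resp = requests.get(url, headers=HEADERS,
                        params={"since": since, "per_page": 100})
    resp.raise_for_status()  # surface auth/permission errors loudly
    weeks: Counter = Counter()
    for item in resp.json():
        stamp = item["commit"]["author"]["date"]  # e.g. "2024-03-01T12:00:00Z"
        year, week, _ = datetime.fromisoformat(
            stamp.replace("Z", "+00:00")).isocalendar()
        weeks[f"{year}-W{week:02d}"] += 1
    return weeks

print(commits_per_week("2024-01-01T00:00:00Z"))
```

Cursor happily generated something like this. Knowing why the Authorization header matters, or what to do when raise_for_status() fires, was still on me.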
But that’s the catch: AI mostly does what it’s told – until it doesn’t. Yes, it can follow instructions, but it can also hallucinate, misinterpret, or take unexpected leaps – especially with agentic systems that are designed to figure things out on their own. It doesn’t intuit. It doesn’t feel. It doesn’t know when to hold space or when to push. And that’s a huge part of good coaching.
The 95/5 problem
AI gets it right 95% of the time (disclaimer: I completely made this number up). But the bit it gets wrong? That can be catastrophic – especially when someone’s struggling with identity, purpose, or mental health. Coaches carry indemnity insurance for a reason. Who’s accountable when an AI gets it wrong?
Coaching isn’t just questions, it’s presence
The ICF (International Coaching Federation) has a set of core competencies and assessment markers that define what good coaching looks like. Most of them require something AI can’t replicate:
- 5.1: “Acts in response to the whole person of the client (the who).”
- 6.4: “Explores the client’s energy shifts, nonverbal cues or other behaviors.”
AI doesn’t have a felt sense of who you are. It can’t read your energy. It can’t notice the pause before you say something difficult. Even if you feed it your values and purpose, it still won’t feel them.
A few times in coaching, I’ve used a tool called “Cast of Characters” with clients – a metaphorical exercise where we explore the different internal personas and beliefs at play in decision-making. It lets clients describe the internal dynamics of a decision without passing judgement on themselves. It’s nuanced, emotional, and requires me to “dance in the moment” – to respond to what’s emerging in real time. That kind of attunement just isn’t something AI can do. Not yet.
What AI can do well
AI has its strengths:
- It’s non-judgmental
- It asks great open-ended questions (if you tell it to)
- It can suggest structured exercises or goals
- It’s always available
For trained coaches, it can be a useful reflective aid – helping to organise thoughts, explore different framings, or spot recurring themes between sessions, assuming the data is suitably anonymised.
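As a deliberately crude illustration of that caveat, here’s a sketch of preparing session notes for theme-spotting – the sessions/ directory is hypothetical, and two regular expressions are nowhere near real redaction of client data:

```python
# A rough sketch of prepping notes for theme-spotting. The sessions/
# directory is hypothetical, and this regex-based anonymisation is a
# crude stand-in for the rigour client data actually deserves.
import re
from pathlib import Path

NAME = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")  # naive full-name matcher
EMAIL = re.compile(r"\S+@\S+")

def anonymise(text: str) -> str:
    """Strip obvious identifiers before notes go anywhere near a model."""
    return EMAIL.sub("[EMAIL]", NAME.sub("[NAME]", text))

notes = [anonymise(p.read_text()) for p in sorted(Path("sessions").glob("*.txt"))]
prompt = ("Across these coaching session notes, list recurring themes, "
          "each with one supporting line:\n\n" + "\n---\n".join(notes))
# `prompt` can now go to whichever model you trust – e.g. the coach_turn()
# helper sketched earlier in this post.
```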
But it can’t role-play with nuance. It can’t use metaphor in a way that lands emotionally. It can’t sense whether a client is ready for a particular tool or conversation. That takes human presence, intuition, and trust.
Coaching is human work
In consultancy, clients often come with a solution and ask us to build it. Our job is to figure out what the real problem is. Coaching is the same. Clients say one thing, but the truth is often underneath. AI can’t dig for that truth. It can only respond to what it’s told.
And that’s what makes me sad. Not because AI is bad – but because people might turn to it instead of a human. Especially in wellness, life coaching, leadership, or career transitions, as in the example above.
But I’m not worried. AI is ubiquitous. People will use it for all kinds of things. I just hope they don’t skip the kind of connection where someone looks past the surface and says, “I hear you – what’s really going on?”
Final thought
While AI can support goal setting and reflection, especially if you’re already trained to coach yourself well, it’s not a substitute for human connection. The kind that notices what’s left unsaid. That sits with uncertainty. That lets the conversation lead somewhere unplanned.
So if you’re navigating purpose, uncertainty, or big decisions about what comes next – don’t settle for algorithmic approximation. Find someone who can sit with your silence, notice what you’re not saying, and help you discover what you didn’t know you were looking for.
And if you’re curious about how that kind of coaching might help – I’d love to chat.
