
The Moment Training Stops Working
You've invested in training. Your people have the knowledge. So why isn't anything changing?
1. The Moment
The leadership programme ended two weeks ago. It was well-designed, well-delivered, and well-received. The feedback scores were excellent. Participants said they learned a lot. Several mentioned it was the best training they'd attended in years.
You're feeling good about the investment.
Then you sit in on a team meeting led by one of the participants. You watch as they run the meeting exactly the way they always have. The same patterns. The same dynamics. The same issues that prompted you to send them to the training in the first place.
Later, you ask how they're applying what they learned. They pause. "It was really useful," they say. "I've been meaning to try some of those techniques."
Meaning to. But haven't.
You start paying closer attention. You notice the same pattern across the cohort. People remember the training fondly. They can recall the key concepts if prompted. But their actual behaviour hasn't shifted.
The training worked in the room. It just didn't make it out.
2. What's Really Happening
There's a fundamental difference between learning something and being able to do it under pressure.
In the training room, everything is optimised for learning. There's time to think. There's space to practise. There's a facilitator guiding the process. The stakes are low. Mistakes are expected.
Back at work, none of that exists. There's no time to think. There's no space to practise. There's no one guiding the process. The stakes are real. Mistakes have consequences.
So people default to what they already know. Not because they've forgotten the training. Because the training hasn't become automatic yet. It still requires conscious effort. And conscious effort is expensive when you're under pressure.
This is why knowledge doesn't equal capability. Knowing what a good feedback conversation looks like doesn't mean you can have one when your heart is racing and the other person is getting defensive. Understanding the theory of delegation doesn't mean you can actually let go when the project feels risky.
The gap between knowing and doing is where most training investment gets lost.
3. The Common Move (and Why It Fails)
When organisations notice that training isn't translating to performance, they usually make one of two moves.
They send people to more training.
If one programme didn't work, maybe a different one will. Maybe it needs to be longer. Maybe it needs a more experienced facilitator. Maybe it needs better content.
This rarely helps. The problem isn't usually the training itself. It's what happens, or doesn't happen, afterward. More training just adds more knowledge that won't transfer.
Or they blame the participants.
"We've tried developing our people and it didn't make a difference." "Training is a nice-to-have, not a driver of results." "Let's cut the L&D budget."
This is the most expensive conclusion of all. Because training can work. It just requires designing for transfer, not just delivery.
4. A Different Choice
Effective development doesn't end when the programme ends. It begins there.
This means building in what happens next before you build what happens during. Before designing any training content, ask: What will participants do differently when they return to work? What support will they need to do it? What will reinforce the new behaviour? What will make it easier to practise than to default?
If you can't answer those questions, the training will produce good feedback forms and little else.
In practice, this looks like:
Manager involvement before and after the training. Not just signing off on attendance, but understanding what participants are learning and actively supporting application. The research is clear: manager support is one of the strongest predictors of whether learning transfers.
Spaced practice instead of concentrated delivery. Instead of a two-day workshop, spread the learning over weeks with application tasks in between. Let people try things, encounter obstacles, and return with real questions.
Follow-up that focuses on behaviour, not just recall. Don't ask "What did you learn?" Ask "What have you tried? What happened? What got in the way?"
Peer support structures. People are more likely to apply new skills when they're not doing it alone. Cohorts, buddy systems, practice groups, whatever creates accountability and shared experience.
And measurement that goes beyond satisfaction. Track what people do differently three months after the training, not just how they felt the day it ended.
5. Practice Prompt
Think about the last training programme your organisation invested in.
Ask yourself three questions:
What were participants supposed to do differently as a result? Be specific. Not "be better leaders" but "have weekly one-on-ones with direct feedback."
What support did they receive to actually do it? Did their managers know what they learned? Did they have opportunities to practise? Did anyone follow up?
How would you know if it worked? What changed in their behaviour, their results, or their team's performance?
If you can't answer these questions clearly, that's the gap. Not the training content. Not the facilitator. The gap between learning and doing.
The next time you commission training, start with these questions before you design anything. Define what success looks like in behavioural terms. Build the reinforcement before you build the content. Measure what matters, not what's easy.
Training can change behaviour. But only if it's designed to.
Like this approach? This article is based on the framework from my #1 bestselling book, Leadership Cannot Be Automated, available on Amazon.
Tanya Davis is the founder of PELMO International and author of the #1 bestselling book Leadership Cannot Be Automated. She works with organisations across 50+ countries to diagnose and fix leadership and communication breakdowns.
