Most of what I know about marketing strategy, I learned by watching things not work.
That might sound like false modesty. It isn't. Eight years in marketing, across agencies, global brands, and in-house teams, gives you something a textbook cannot: a long list of moments where a confident plan met reality and lost. Those moments, more than any success, are where the real learning lives.
I'm writing this partway through my MSc in Marketing Management at the University of Leeds. The degree is doing something unexpected. It's not so much teaching me things I didn't know as giving me the language to describe things I already understood intuitively. And in doing so, it's also surfacing the gaps. The assumptions I'd never examined. The frameworks I'd been applying without knowing they were frameworks.
So, this is not a career retrospective. It's a series of lessons, specific, situated, and still relevant, drawn from eight years of getting things wrong in interesting ways.
Lesson One: Strategy is a decision about what not to do.
I learned this at Neo@Ogilvy, though I wouldn't have called it that at the time.
Early in my tenure there, I worked on a client account where the brief arrived with eight objectives. Eight. Each one legitimate. Each one important to a different stakeholder. The instinct, under that kind of pressure, is to try to serve all of them, to build a plan broad enough to touch every corner of what the business wanted.
We did that. And the campaign was fine. Technically competent. Measurably unremarkable.
What I understand now, and didn't then, is that a strategy that tries to serve eight objectives is not a strategy; it's a list. Real strategic thinking requires you to look at eight objectives and argue for the two that matter most. That argument is uncomfortable. It requires you to tell a senior stakeholder that their priority isn't the priority. Most people avoid that conversation. That avoidance is where strategies go to die.
The best strategists I've worked with are not the ones with the most sophisticated frameworks. They're the ones willing to make the call on what gets cut.
Lesson Two: The channel is not the answer. It's a hypothesis.
GroupM taught me how media works at scale, which also means it taught me how quickly smart people can fall in love with a channel and mistake that affection for strategy.
I saw this pattern repeatedly. A business has a growth problem. Someone in the room is confident about a particular channel - paid social, programmatic, SEO, whatever the moment's favourite was. The conversation moves quickly from 'what does the business need?' to 'how do we optimise this channel?' The strategic question gets skipped. The channel becomes the answer before anyone has properly defined the question.
SEO was not immune to this. I've spent a significant portion of my career working in and around organic search, and I've seen the same trap close around it. A brand wants more customers. Someone decides the answer is SEO. Rankings improve. Traffic grows. Conversion stays flat. Quarterly review is uncomfortable.
The issue was never the channel. The issue was that the channel was selected before the customer journey was understood, before the conversion architecture was right, before anyone had asked whether organic search was where the target audience actually was.
Channels are hypotheses. They should be selected after the strategic thinking is done, tested with appropriate rigour, and held accountable to business outcomes, not to their own internal metrics.
Lesson Three: What the brief says and what the brief means are almost never the same thing.
This one took me longer than it should have to learn, partly because I was good at executing against briefs and didn't examine them as carefully as I should have.
At GUS Global, working closer to the commercial reality of a large organisation than I had in agency life, I started to understand the gap between what a brief says and what the business actually needs. A brief that says 'increase organic traffic' might actually mean 'we're losing budget justification and need to show impact.' A brief that says 'improve brand awareness' might mean 'our sales team says prospects don't know who we are.' A brief that says 'we need better content' might mean 'our current content is embarrassing us in front of clients.'
None of those are the same problem. None of them have the same solution.
The discipline of interrogating the brief, politely, systematically, and early, is one of the most valuable things a marketing professional can develop. It requires asking questions that can feel presumptuous, especially early in a career. 'What business problem are we actually trying to solve?' is sometimes received as an implicit criticism of the brief. It isn't. It's the question that separates a marketing partner from a marketing vendor.
Lesson Four: Data tells you what happened. It doesn't tell you why.
This is the lesson I see violated most often, and most expensively, in marketing teams.
Data literacy has improved enormously across the industry over the past decade. Most marketing teams now have access to more data than they can meaningfully use. GA4 dashboards. Attribution models. Keyword ranking trackers. Heatmaps. Customer journey analytics. The infrastructure of measurement is, in many organisations, genuinely impressive.
And yet the same mistake keeps appearing in the reporting: a metric goes up or down, and the team treats the movement as an explanation rather than an observation. Traffic dropped - the algorithm changed. Conversions improved - the new landing page is working. Return on ad spend declined - the market is getting more competitive.
Each of those might be true. None of them is necessarily true. The data shows the movement. The why requires investigation: qualitative signals, customer conversations, competitor analysis, market context. The best marketing analysts I've worked with treat every data point as the beginning of a question, not the end of one.
Lesson Five: The MSc is teaching me to name things I already knew.
I came into the Master's expecting to learn new things. I am learning new things. But the most useful effect has been different from what I expected.
When you've been practising something for eight years without formal academic grounding, you develop a kind of tacit knowledge — you know how things work because you've seen them work (and fail). But tacit knowledge is hard to communicate, harder to teach, and almost impossible to defend in a room where someone is asking you to justify your strategic recommendation.
The academic frameworks — consumer behaviour theory, brand architecture, strategic marketing models — are giving me a vocabulary. And vocabulary matters, not because it makes you sound more credible, but because it forces precision. When you have to articulate why a brand's positioning is creating cognitive dissonance in its target segment, you think more carefully about what you actually mean. You can't hand-wave.
There's also something valuable in being back in an environment where your assumptions get challenged by people who have studied the evidence, not just the practice. Some of what I've believed for years holds up. Some of it, on examination, is flimsier than I thought. That discomfort is exactly the point.
What Eight Years Actually Prepares You For
None of these lessons is particularly glamorous. They don't involve a pivot moment or a campaign that changed everything. They're accumulations: slow revisions to how I think, shaped by clients, colleagues, and campaigns across eight years.
What experience actually prepares you for is ambiguity. The ability to walk into a situation where the brief is unclear, the data is incomplete, the stakeholders have competing priorities, and the timeline is short, and still make a decision. Not a perfect decision. A defensible one, made with the best available information, that you can hold to account and revise when it turns out to be wrong.
That, more than any specific skill or channel expertise, is what eight years in marketing is worth. The Master's is making me better at explaining why.