Reflecting on my top 5 modules: Contemporary Issues

University & Life · 16 May 2026 · 10 min read

If you've been following along, I'm doing a five-part series on the modules from my Level 6 Project Management Degree Apprenticeship at Northumbria University that actually changed how I work. Last week was Business Strategy, the one that rewired how I think. This week is a different beast.

Contemporary Issues in Project Management.

I'll be honest, when I first saw the module name I thought it was going to be one of those filler modules. You know the type. Broad enough to mean anything, vague enough to mean nothing. A few lectures on "the changing landscape" with some buzzwords thrown in and a group discussion where everyone nods along pretending they've done the reading. I was wrong. Quite wrong, actually.

The module covered emerging technology, skills gaps in business, working practices, sustainability, and data governance. All things that sound like LinkedIn headline bait until you actually have to sit with them and apply them to your own organisation and your own projects. That's when it gets uncomfortable, because you start realising how much of this stuff you've either been ignoring or winging. I was doing a healthy mix of both.

AI, trust and the room that went quiet

Let's start with the big one. AI was a huge part of this module, and the timing was perfect because I'd just been involved in delivering an AI proof of concept for a public sector client.

The idea was solid. We were using AI to surface risk signals and support decision-making in an environment that's traditionally very manual, very process-heavy and built on decades of professional expertise and human judgement. The kind of environment where people have earned their instincts through years of doing the job. Then we rock up and essentially say "hey, we've built a thing that can help with that."

You can probably guess how that went down.

The pushback wasn't hostile. It was something more difficult to deal with than that. It was fundamental. Some people didn't trust the technology. Some didn't understand what it was actually doing under the hood. And some, quite reasonably, just didn't want it. They'd been making good decisions for years without a machine telling them what to think, and they weren't about to hand that over because someone in a tech consultancy said it was the future.

I remember sitting in assurance workshops thinking, this isn't a technology problem. This is a people problem. And that realisation changed how I think about AI entirely.

What I took from it, and what I genuinely believe now, is that the hardest part of AI in delivery is never the technology. It's the human bit. It's running demos patiently, answering the same questions without getting frustrated, being honest about what the tool can and can't do. It's accepting that trust takes time and you can't shortcut it. You can't PowerPoint your way past decades of professional instinct. You have to meet people where they are and let them come to it in their own time. Some will. Some won't. Both are fine.

The module gave me the language and the frameworks to understand why that friction exists. The "black box" problem — where the more powerful your AI model becomes, the harder it is to explain how it reached its conclusions. The governance paradox, where the tool that's meant to improve decision-making actually makes oversight harder because nobody can fully trace the logic. In public sector work, where every decision needs to be auditable and defensible, those aren't theoretical concerns. They're the things that will kill your project if you don't plan for them from day one.

But here's the thing I keep coming back to: none of that means AI isn't worth pursuing. It just means you have to be honest about the reality of implementing it, not the shiny version you see in the sales deck. I think the organisations that will get the most out of AI aren't the ones moving fastest. They're the ones being most deliberate, building proper foundations, investing in governance, and bringing their people along with them rather than dragging them.

How I actually use AI now (and why I think every PM should too)

I should probably clarify at this point that I'm not building machine learning models or writing Python in my spare time. I'm a Delivery Manager. But AI has genuinely changed how I work, and this module is a big reason why.

I use Claude every day, both my personal account and my work account. It's become part of how I think through problems, draft communications, structure my approach to things, and pressure-test my own ideas. I'll throw a half-formed thought at it and use the response to sharpen my thinking, not replace it. That distinction matters to me. I'm not outsourcing my brain. I'm giving it a sparring partner.

I use Copilot from time to time as well, but Claude's my go-to. I also read the TLDR newsletters religiously to keep up with what's moving in the industry, and I'll dig into their site when something catches my eye. I'm naturally curious. If there's a new tool or a new way of doing something, I'll try it. Most of the time it's noise. Occasionally it's genuinely useful. But you don't find out which is which unless you actually have a go.

That curiosity was always there, but this module gave it direction. Before it, I was aware AI existed (hard not to be when every other LinkedIn post is about it). After it, I understood enough to have an actual opinion. To ask better questions in conversations about AI. To spot where it could genuinely add value on a project versus where it's being crowbarred in because someone read an article and got excited. That feels like a meaningful shift for a PM, because we're increasingly going to be the people in the room who have to make practical decisions about this stuff on live projects with real budgets and real consequences.

The unsexy thing that actually determines whether your project works

The other part of this module that properly stuck with me was data. I know. Incredibly glamorous. But this is the bit nobody wants to talk about, and it's honestly the bit that matters most.

On one of my projects, we lost entire sprints to data cleansing. Not building features. Not shipping things clients could see. Cleaning data. Because the foundations weren't in place and nobody had properly accounted for the effort in the plan. It's the kind of work that never makes it into the case study or the project highlight reel, but it's what actually determines whether your fancy AI tool, your reporting dashboard, or your new feature works properly or falls over on day one.

This module completely reframed how I think about data. Not as a technical problem that the developers sort out, but as a strategic asset that needs governance, ownership, and deliberate investment from the start. Data isn't someone else's problem. If you're managing the delivery, it's your problem, because when the data's wrong, everything downstream is wrong and you're the one explaining to the client why things are slipping.
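Treating data as a governed asset can start with something very small: a completeness gate that rejects bad records before they flow downstream, rather than discovering them mid-sprint. Here's a minimal sketch of that idea; the field names and rules are illustrative, not from any real project.

```python
# A pre-flight data quality check: partition records into clean and rejected
# based on simple completeness rules. Field names here are hypothetical.

REQUIRED_FIELDS = ("id", "owner", "updated")

def quality_report(records):
    """Split records into (clean, rejected); rejected items carry the
    list of fields they were missing, so the gap is explainable."""
    clean, rejected = [], []
    for rec in records:
        missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
        if missing:
            rejected.append((rec, missing))
        else:
            clean.append(rec)
    return clean, rejected

records = [
    {"id": "R1", "owner": "Asha", "updated": "2026-05-01"},
    {"id": "R2", "owner": "", "updated": "2026-05-02"},  # blank owner
    {"id": "R3", "owner": "Ben"},                        # no updated date
]

clean, rejected = quality_report(records)
print(f"{len(clean)} clean, {len(rejected)} rejected")
for rec, missing in rejected:
    print(f"  {rec['id']}: missing {', '.join(missing)}")
```

The point isn't the code, it's the ownership: someone has to decide what "clean" means, and that decision belongs in the plan from day one, not in a rescue sprint.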

It's one of the reasons I've become quite data-driven in how I manage projects now. I pull metrics from Azure DevOps — cycle times, throughput, work in progress, all the usual stuff. But I also build my own tracking and reporting on top of that. Not because I don't trust the tooling, but because the out-of-the-box metrics don't always tell the story you need for a specific client or a specific conversation. Sometimes you need to shape the data yourself to make a point land.
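To make that concrete, here's the kind of shaping I mean, sketched in Python with made-up work items (nothing here is pulled from a real Azure DevOps project): compute per-item cycle time and throughput over a window, which the out-of-the-box dashboards give you in aggregate but not always sliced the way a specific client conversation needs.

```python
from datetime import date

# Hypothetical work items: (id, started, completed). Illustrative only.
items = [
    ("PBI-101", date(2026, 4, 1), date(2026, 4, 4)),
    ("PBI-102", date(2026, 4, 2), date(2026, 4, 9)),
    ("PBI-103", date(2026, 4, 3), date(2026, 4, 5)),
    ("PBI-104", date(2026, 4, 7), date(2026, 4, 14)),
]

def cycle_times(items):
    """Days from start to completion for each finished item."""
    return [(done - started).days for _, started, done in items]

def throughput(items, window_start, window_end):
    """Count of items completed within the given date window."""
    return sum(1 for _, _, done in items if window_start <= done <= window_end)

cts = cycle_times(items)
avg_cycle = sum(cts) / len(cts)
print(f"Average cycle time: {avg_cycle:.1f} days")
print("Completed w/c 7 Apr:", throughput(items, date(2026, 4, 7), date(2026, 4, 13)))
```

Once the numbers live in your own code rather than a fixed widget, you can cut them by work type, by client, or by whatever the conversation in front of you actually needs.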

When I walk into a stakeholder meeting now, I'm backing up recommendations with evidence, not gut feel. If I'm saying we need to adjust scope or push a deadline, I've got the numbers to show why. That shift — from opinion-led to evidence-led — started in this module. And it's probably the single most practical thing I took from it.

Why this module matters more than the name suggests

Contemporary Issues sounds like a module you endure rather than enjoy. I get it. But for me, it was the one that connected the academic side of the degree to the actual industry I work in, right now, in real time.

The AI landscape changes weekly. I'm not exaggerating. Working practices are still evolving. Data governance is becoming a genuine differentiator, not just a compliance checkbox. Sustainability is creeping into procurement criteria in ways that will catch people off guard if they're not paying attention.

As a Delivery Manager, I can't afford to coast on what I knew six months ago. This module was the thing that made me start actively keeping up — not just skimming headlines but actually thinking about what these shifts mean for my projects, my clients, and my own career.

I'm still learning. That's kind of the whole point. But at least now I know what I'm looking for and, more importantly, I know what questions to ask when I find it.

Next week, Programme, Portfolio and Project Management. The one that made me zoom out.