Should you?
Not a prompt engineering tutorial. Not a data literacy course. Not another breathless piece about AI productivity dressed up as education. An honest engagement with the question underneath all of those.
Why this exists, and what it won't pretend to be.
This course exists because I couldn't find the one I wanted when I needed it. Not a prompt engineering tutorial. Not a data literacy module designed to make you more useful to the economy that's disrupting you. Something that engaged honestly with the question underneath all of those: whether any of this is something you should be doing at all, and what it means if the answer is complicated.
So I wrote it myself. And I'm giving it away, because the conversation is more important than the revenue.
I've spent years as a project manager, navigating change with people. Not for them. Not managing communications about change from a safe distance. In the dirt with them, working through what it actually means when the ground shifts. AI is the most consequential version of that shift I've seen. And I think most of the education being offered about it is either technically narrow, genuinely naive about the stakes, or, more troublingly, designed to serve interests other than yours.
The most important question in any project is not "can we?" It's "should we?" And I've never known a moment when that question needed asking more urgently than it does right now, about AI.
This course is not anti-AI. It's pro-honesty. It takes the anxiety seriously before it offers the reassurance. It asks you to think before it tells you what to think. And it ends not with a framework, because I'm not selling frameworks, but with a conversation. The capability to build your own answers comes after. This is the conversation that makes capability possible.
Take this. Share it. Argue with it. If something here is wrong, tell me. The feedback link is at the bottom of every module and it's real, not a CRM trap. The conversation has to go both ways or it isn't one.
Before you read further, what are you actually hoping this course will do for you? Name it honestly.
Tell us what came up
The anxiety is rational.
If you're worried about AI and your job, I want to say something clearly before anything else: that worry is rational. It is not catastrophising. It is not a failure of adaptability or a personality defect or evidence that you're "not a digital person." It is a reasonable response to real signals in a world where the cost of being wrong about this is not a productivity dip, it's your livelihood, your identity, your contribution to your family.
Anyone who tells you the anxiety is irrational is either not paying attention, or has something to gain from your compliance.
46% of UK workers are already using AI tools their employer has not officially sanctioned. Most of them have no intention of stopping. And underneath that statistic is not recklessness, it's a mixture of curiosity, desperation, and a gap between what people need and what they've been given.
CIPD / YouGov UK Workforce Survey · 2024
The anxiety shows up differently in different people. For some it's sharp and specific, "my role involves report writing, and AI writes reports." For others it's diffuse and harder to name, a background hum that the world is moving in ways they haven't been told about and don't fully control. Both are valid. Both are worth naming before they're addressed.
We're also not having this conversation in a vacuum. The cost of living crisis is real. Household finances are stretched. The idea of retraining, of spending time and money to acquire new skills in a market nobody fully understands yet, carries real weight when the mortgage still needs paying and the children still need feeding. Any education about AI that ignores that context is education designed for people who can afford to be wrong. Most people can't.
The anxiety is not the problem to solve. It's information. The question is what it's telling you, and whether the people making decisions about AI in your organisation have ever actually listened to it.
There is a difference between anxiety and paralysis. Anxiety is a signal worth reading. Paralysis is what happens when the signal has no outlet, when there's nowhere to put it, no conversation to take it to, no one willing to hear it as useful information rather than a problem to be managed away. That's what this course is trying to change. Not the anxiety. The silence around it.
What specifically are you afraid of? Not the general thing, the specific one. Name it as precisely as you can.
These reflection spaces are private. Nothing you write here is saved or sent anywhere. This is for you to think on paper.
Ask your team: "When you think about AI and your job, what's the first feeling that comes up?" Don't manage the answers. Just listen.
Tell us what came up
The line everyone's repeating.
"You won't be replaced by AI, but someone using AI will replace you." You've heard this. It's in government speeches, employer communications, LinkedIn posts, and training programme brochures. It's repeated so often it's started to feel like fact rather than framing. Let's slow down on it, because buried inside that sentence are some assumptions that deserve daylight.
The job itself will remain.
Maybe. Or maybe "someone using AI" does what three people previously did, and only one job exists where there were three. The line doesn't address that.
"Someone using AI" still earns what you earn.
Not necessarily. If one person with AI tools replaces two without them, the economic pressure on salaries in that role changes significantly. Productivity gains don't automatically become wage gains, especially for workers without negotiating power.
This is primarily a matter of personal responsibility.
The framing makes structural displacement an individual problem to solve. "Learn to use AI" puts the burden on the worker, not on the organisation doing the replacing or the government overseeing the transition.
AI is a neutral tool with no agenda.
AI has owners. It has investors. It has directions it's being developed in. The interests of those owners are not always, or even usually, aligned with the interests of the workers using the tools.
None of this means the line is entirely false. There is real truth in it, people who understand AI tools and use them thoughtfully will, in many roles, have advantages over those who don't. But that truth is being used to do a lot of work that it wasn't designed for. It's being used to reassure without informing, to redirect attention from structural questions to individual ones, and to suggest that the transition is fundamentally fair when the evidence for that is, at best, mixed.
Who benefits from you believing that this is entirely your problem to solve? That's not a cynical question. It's the right one.
Cast your mind back to around 2014, when autonomous vehicles were being heralded as the transport revolution that would displace millions of professional drivers. The advice, frequently and publicly given, was "learn to code." The implication: the disruption is coming, personal retraining is the answer, the new economy will absorb you if you're adaptable enough. Ten years later, autonomous vehicles have not disrupted driving at the scale predicted. But the "learn to code" advice shaped an entire wave of education policy and personal investment. Some of it was useful. Some of it was preparation for a future that didn't quite arrive in the way described. And much of it served the tech sector's interests, in building a technically literate workforce, more than it served the drivers'.
We'll come back to that pattern in the next module.
Do you believe the line? Not whether it sounds reasonable, whether you actually believe it applies to your situation.
Private. Not stored. Just for you.
Show your team the line. Ask them: "Which of the assumptions in this sentence do you agree with, and which make you uncomfortable?" See where the disagreement lives.
Tell us what came up
What actually happens.
Here's what actually happens, not what gets promised, but what the pattern shows. The technology arrives. The disruption is real. The jobs that disappear, disappear. And then the retraining programmes arrive, sometimes well-intentioned, often underfunded, almost always designed around what the economy needs from displaced workers rather than what displaced workers need from the economy.
This isn't a new story. The Luddites are remembered as anti-technology zealots. What they actually were was skilled textile workers who understood precisely what mechanisation would do to their livelihoods, and who were right. The machines arrived. The work changed. The promises of new opportunity were partially fulfilled, for some people, in some places, over a longer timescale than was sold. The human cost in the meantime was real and largely unacknowledged by those who benefited from the transition.
Retraining programmes are designed for the economy's needs. That's not a conspiracy, it's a design question. And it's worth asking who was in the room when the design happened.
Consider the pattern in data and AI skills programmes over the last decade. Governments and employers, often in partnership with large technology companies, invested heavily in training people in data analytics, digital literacy, and AI-adjacent skills. The people being trained were, in many cases, workers whose roles were being automated or restructured. The skills they were being trained in were, in many cases, useful to the technology companies doing the replacing, in testing AI systems, labelling training data, performing quality assurance on outputs. The displaced worker becomes, through the retraining programme, a component in the machine replacing their former colleagues.
To be clear: learning to use data tools is genuinely valuable. Digital literacy matters. This isn't an argument against skill development. It's an argument for asking, before you enrol, before you invest, before you sign up to a programme because the anxiety of not doing so is greater than the cost, who designed this, what problem it's actually solving, and for whom.
The most vocal proponents of AI retraining are often the companies most invested in AI replacement. Their interest in your upskilling is real. It is not the same as your interest in your upskilling. Those two interests sometimes align. They don't always.
Think about any AI training or upskilling you've been offered or completed. Who funded it? What did they get from you doing it?
Private. Not stored.
Ask your team: "If our organisation offered AI retraining tomorrow, who do you think it would be designed to serve?" Start with that question. See what it surfaces.
Tell us what came up
Shadow AI: the human story.
46% of UK workers are using AI tools their employer hasn't sanctioned. Let's look at who those people actually are, because the organisational instinct is to reach for a policy document, and the policy document will miss the point entirely if it hasn't first understood the person.
There are roughly three archetypes in that 46%, and none of them are the reckless risk-taker that "shadow AI" implies.
The frustrated one. Their tools are inadequate. The CRM is slow, the reporting system is painful, the approved software doesn't do what they need it to do. AI does. They're not trying to circumvent security, they're trying to do their job. The shadow tool is a symptom of a sanctioned tool that failed them.
The curious one. They've been paying attention. They've figured out more than their manager has, more than their organisation has officially acknowledged. They're using AI thoughtfully, carefully, and with more sophistication than the policy team writing about it. They're ahead of the organisation, and the organisation doesn't know it.
The anxious one. They're not sure AI will help, but they're terrified of being behind. They're using tools inconsistently, sometimes productively, sometimes not, in a state of low-level panic about relevance. They need a conversation and a framework, not a policy and a prohibition.
Shadow AI is not a compliance problem. It's a communication failure. The conversation about AI has happened above people's heads rather than with them, and they've responded the only way they could.
The data exposure risks from shadow AI are real. People are pasting client information into AI tools without fully understanding what happens to it. That's a genuine concern. But the response to that concern is not a policy document threatening consequences, it's the honest conversation that should have happened before the tools arrived. The organisation that discovers shadow AI and reaches for enforcement has misdiagnosed the problem. The conversation it avoided is the one it now urgently needs.
If you're one of the 46%, this isn't a criticism. It's information about where your organisation has left a gap. The question worth sitting with is not "was I wrong to do this?" but "what does my doing this say about what my organisation hasn't given me?"
If you've used an AI tool without official approval, what was the real reason? Not the practical reason. The human one.
Private. Not stored.
Create a genuine safe space and ask: "Has anyone in this room used an AI tool for work that wasn't officially approved? Not to report it, to understand why." Then listen without judgement.
Tell us what came up
Your expertise is someone else's training data.
Here's something worth sitting with. The AI model that may one day compete with you, or that already is, was almost certainly trained, in part, on work like yours. Not necessarily your specific documents. But your profession's output. Legal briefs. Architectural drawings. Financial analyses. Medical notes. Engineering specifications. Project reports. The higher-quality, more structured, and more legible your professional output, the more useful it was as training data for large language models.
This is not a conspiracy theory. It is the predictable consequence of how these systems are built. Large language models learn from human-generated text. The richest, most expert human-generated text comes from knowledge work. Knowledge work is therefore disproportionately represented in what these models learned to do. And it is therefore disproportionately represented in what they're now capable of replicating.
You were consulted in absentia. Your professional output contributed to the capability that's now being positioned as a replacement for you. Nobody asked your permission. In most cases, legally, they didn't need to.
This isn't, again, an argument for not engaging with AI. The knowledge is useful whether it's comfortable or not. But it should change how you think about your relationship with these tools. When you use an AI system for work, you are potentially contributing further to its training. You are helping it understand your professional context, your terminology, your decision-making patterns. For some tools and some uses, this is explicitly how the system improves.
None of this makes AI tools unusable or the companies building them uniquely malevolent. But it does suggest that "use AI to stay relevant" deserves to be a more considered decision than it's usually presented as. The relationship between you and these tools is not neutral. It has direction. Understanding the direction is part of using them thoughtfully.
The question worth asking is not "should I use AI tools?", in most cases the practical answer is yes, in some form. The question is: on what terms? With what awareness? Contributing to what? These are questions you can only answer if you've first been honest with yourself about what the relationship actually is.
Does knowing this change how you think about using AI tools in your work? What, specifically, feels different now?
Private. Not stored.
Ask your team: "What parts of our professional output, our knowledge, our judgement, our patterns of decision-making, do you think AI systems have already learned from? How does that feel?"
Tell us what came up
The question.
We're back where we started. Should you?
Now that you've sat with the anxiety, the framing, the historical pattern, the shadow behaviour, and the uncomfortable truth about training data, the question doesn't go away. If anything, it gets sharper. Because now it's not a rhetorical opener. It's a real question with real weight. And it's actually three questions, depending on where you sit.
Should you use AI tools in your work? For most people, in most roles, the practical answer is probably yes, in some form, with some awareness of what the relationship is. But "yes" arrived at through honest consideration is very different from "yes" arrived at through anxiety about being left behind. The tool is the same. The relationship you have with it is entirely different. One is agency. The other is compliance dressed up as adaptability.
Should your organisation be adopting AI right now? This depends on whether the conversation has happened yet. Not the strategy announcement, the conversation. With the people doing the work. About what they need, what they're already doing, what they're worried about. An AI strategy built on top of unanswered anxiety and unaddressed shadow behaviour is not a strategy. It's a risk with a PowerPoint.
Should the AI adoption you're being asked to participate in happen the way it's being described? This is the hardest question, and the most important. It requires you to hold your own interests in mind while engaging with the process. Not obstructively. Clearly. Asking what was considered when this was designed. Who was in the room. What the governance looks like. What happens if it goes wrong. These are reasonable questions. Any organisation that can't answer them doesn't have a strategy, it has enthusiasm.
Agency is not the absence of AI. It's choosing your relationship with it deliberately, with full awareness of what that relationship involves.
The "should you?" question is not a reason not to act. It's a reason to act deliberately rather than reactively. There is a version of AI engagement that is thoughtful, human-centred, honestly governed, and genuinely useful. It's just not the default. The default is faster and louder and not particularly interested in whether it's working for you.
Which version of "should you?" are you actually grappling with right now? And what would a genuinely honest answer require of you?
Private. Not stored.
What to actually do.
Not "learn Python." Not "get an AI certification." Not "follow these five productivity hacks." The advice that actually helps right now is harder to package and slower to deliver results. It's also more durable. Here's what I think you should actually do, and why each of these matters more than the things being sold to you.
Have the conversation.
With your team, with your manager, with yourself. Not the briefing, the conversation. Use the Conversation Cards if the room needs a starting point. The conversation is the capability that matters most right now, because nothing else useful happens until it's been had.
Know your value beyond what AI can replicate.
Judgement. Relationships. Context that isn't written down anywhere. Accountability that belongs to a person, not a tool. The ability to be wrong and learn from it in real time. These are not small things. Map them. Name them. Be able to articulate them clearly, to yourself first, to your employer when it matters.
Ask your employer hard questions.
What is the AI policy? How were we consulted in building it? What happens to roles that change significantly? What governance is in place for AI tools that touch client or personal data? You don't need to be adversarial. You need to be clear. These are reasonable questions. An employer who can't answer them needs to hear them more than you need to hear the answers.
Build a deliberate relationship with AI tools.
Use them. Notice what you feel when you do. Notice what changes and what doesn't. Notice what they're better at than you expected and what they consistently fail at. Build a relationship based on evidence rather than either fear or uncritical enthusiasm. Your own informed opinion is worth more than any vendor's demo.
Don't figure this out alone.
The anxiety is worst in isolation. The conversation is most useful in community. Find the people asking the same questions, in your organisation, in your sector, and beyond it. The community that Built Around Us is building exists for this. Not to sell you a framework. To have the conversation together, and build the capability that comes after.
This is the end of Level 1. The conversation has started. What comes next is capability, and capability is built, not delivered. The next level offers paths for individuals, for managers, and for decision makers who want to move from having the conversation to building something of their own from it.
But first: tell us what came up. Not your data. What the conversation stirred. What surprised you. What question you're still sitting with. What you'd add to these cards if you were writing them. That feedback is the research that makes the next thing better, and you're part of building it.
After everything in this course, what's the one conversation in your organisation that most needs to happen, and hasn't? Who needs to start it?
Tell us what came up
You've finished Level 1.
The conversation has started.
This was never about giving you answers. It was about making the questions unavoidable. What you do with them is yours to decide, and you're better equipped to decide now than you were 50 minutes ago.
Level 2, Capability, is being built from what people tell us came up in Level 1. You can shape what it becomes.