AI and Work – How Young People Are Thinking About the Future
“I’ll use my passion elsewhere if it’s not wanted at work” – Fernanda (30)
"In a week there'll be three more tools I need to learn. And two of the ones I've learned — you don't need those anymore." – Noah (26)
"People aren't interested in their projects anymore... it was different only a year ago." – Clara (25)
There is a prevailing narrative, generally blowing from Silicon Valley right across the world, that AI will imperil huge swathes of human work, especially affecting entry-level and young workers. We hear a lot about this from Dario Amodei, Sam Altman and many others. Imagine being young, at the start of your working life or a short way into it, and hearing this discourse. Does anyone stop and ask what young people themselves are thinking?
Well, that’s what we have just done at Full Moon. Right from the start we have had as part of our mission a drive to engage students and younger workers with thinking about technology and humans. We sponsor a young artist every month to do our Full Moon illustration.
But this month, we took this imperative further. We went out and talked to 25 people aged between 19 and 30, hosting in-depth group discussions to find out how they are framing AI and the changing world of work. What follows is what we heard. We found it illuminating and think you will too.
It is not a simplistic story. The young people we heard from are excited, and fearful in equal measure. They have profound questions about how their personal knowledge will develop and how deep it can be. They fret about loss of culture and loneliness in organisations. The velocity of change is a given and energising, but keeping up is exhausting. The discussion is live and constant – at work, at home, in pubs. Most of all, where will meaning from their labour come from?
A quick word on methodology.
- This is not quantitative research and we are not putting it forward as statistically reliable. It reflects conversations with a small number of real – not synthetic – people, all of whom are in, looking for, or expecting, white collar jobs.
- Ages span 19 – 30, which is a broad range with very different life challenges at either end. We had a good observable gender mix, slightly biased towards women.
- Contributors were found via posts on LinkedIn and a request to Full Moon subscribers.
- A wide range of locations including Dubai, Chile, the Netherlands, Spain, the UK, the US, Italy, Brazil, India and Germany. The highest representation was from the UK (four) but, as you will see, the viewpoints expressed were remarkably consistent across the world.
- Note: some quotes are tidied mildly from verbatim.
So let’s be clear: what follows are pointers at how some people of this generation are thinking, which can be explored further. That said if we had to put money on it — and we have seen a lot of qualitative research like this in our careers — we’d bet our conclusions are not far off for most markets, and most young people trying to find or hold down white collar jobs.
We’ve divided this into two parts. In part one, we run through what we heard, structured into nine key themes. In part two, we look briefly at what ‘experts’ — from frontier labs to academics — are saying about AI and work, to see if there are patterns there, and contrast these with what our research group are thinking.
The public discourse is pretty polarised - utopian vs. dystopian. Our participants are living something more complex — and arguably more truthful. And – plot spoiler – something major is lacking.
PART ONE – THE TESTAMENT OF YOUTH
We spoke to over twenty young people in a series of six hour-long conversations. They were broad-ranging and varied, but across them a series of key patterns emerged, and there were core refrains that we heard time and again.
These included concerns over the long-term impact on learning of relying on AI, the difficulty of keeping up with the pace of change in LLMs, and questions over whether an AI-augmented work culture would inspire them to bring their best to work. Taken together, the nine themes give a picture of the way young people are experiencing this moment with AI and work and envisaging the future.
1. Ambivalence
The most consistent message we heard was ambivalence about AI and its effect on work. This was true both between respondents and within individuals themselves. As one of them said – “it’s a complicated discussion”. There is a high degree of confusion and uncertainty over what AI means for work. Perhaps surprisingly, this came not with ill will towards the technology itself, but with acceptance that it means change and that they need to get on board.
There was an underlying optimism about the long term: despite anxiety, no participant across six sessions concluded that human work was finished. All believed humans would remain essential — for reasons of intent, creativity, accountability, or relationships. And the future was not necessarily dark. In fact several contributors spoke of expecting in the future more time off, a higher quality of life, longer weekends, more hybrid working – in short – a better balance.
The pessimism is about the transition, not the destination.
Fernanda (30) is excited that AI frees her identity from her job, but worried about losing human connection. Memorably she says "If I cannot like, give all my passion or my knowledge in the work that I'm doing right now, maybe I can use that passion and knowledge in other stuff." Before AI, her sense of self was closely tied to her job. That is loosening.
Leo (26-30) calls AI a "superpower" but fears being exposed as shallow if anyone looks beneath the surface.
Maitena (27) has a "permanent feeling of running after AI" — exhausting, but one she accepts as necessary.
James says "Now I'm just trying to become more comfortable with ambiguity. I'm also getting better at telling the difference between hype/bullshit/noise and real breakthroughs".
Clara (25) loves AI (she is doing a PhD in it) but is seeing colleagues lose motivation and purpose in real time.
Christina (30) is demoralised by doing trend research alone with Copilot instead of a team of five.
Pamela (26) introduced the sharpest emotional vocabulary: she talked of "dread and grief" — grief specifically that creative output is now prompted rather than imagined. She is actively trying to pivot out of advertising because she does not want her job to become purely prompting. The word "grief" is particularly striking: it implies something has already been lost, not just threatened.
Alex (29), recently laid off from a global consulting company, candidly shared that he is feeling lost about direction — he sees AI as part of a destabilising landscape but not the sole cause. Yet later in the same session, he talked with enthusiasm about how he is using AI to build new routes to income.
His thoughts on the wider landscape were echoed by Erin (21) - "Cost of living is rising, everything's going up, and it's making me think a lot more about getting a job for the practicalities of living — not just one that I'm fully wanting to do."
This is very important context. The idealism participants express about future work — more balance, more creativity, analogue skills, human connection — is real. But it sits alongside a very practical economic constraint that particularly affects those in high-cost cities like London. The two are in tension, and the tension is likely to intensify.
Erin also raised the environmental cost of AI — energy use, data centres — as a separate source of worry. It is worth remembering that there are emerging concerns that go beyond jobs. The AI future of work debate is often framed as an isolated subject – but it isn’t.
2. Speed
Quite simply – it is exhausting keeping up. We heard this in every session from pretty much everyone. It is interesting that the mantra the frontier model companies are driving (‘this will only accelerate’) is the lived reality of younger people trying to stay up to speed. This is probably the result both of experiencing change in what the models can do (e.g. the recent shift to agentic capabilities) and of the industry narrative, which people are paying close attention to.
Noah (26) - "In a week there'll be three more tools I need to learn. And two of the ones I've learned — you don't need those anymore."
Clara (25) - "AI is coming after me. I don't have time to understand it the way I would like."
Marcel (30) - "There's no way of getting ahead — it's a wave that's coming to get you and you don't know when or how."
Maitena (27) - "I feel like there's something new being asked of me every week. It's really stressing me out."
Lucy (25) - "We had clients say that if we didn't streamline our processes with AI, they would take their business elsewhere. The pressure to keep up was quite a lot." It is important to see here that external commercial pressure, rather than personal anxiety, is driving the velocity complaint.
However, some participants chose to embrace this pace: all of them are either in tech or actively building with AI tools, and find it energising. Pamela is using Figma MCP and planning to move into AI-native design workflows. Notably, the more proactive users tend to be in entrepreneurial or technically adjacent roles.
3. Credibility
It’s a cliché — with truth in it — that older people complain they cannot fix cars any more because they do not know what’s under the hood, jammed as it is with impenetrable technology that needs special computer-based diagnostics.
We heard a troubling version of this across our sessions which applies to all white collar work. Participants worry they are building the AI-enabled skills to deliver outputs without building solid foundations — that they will be efficient without being expert, that they will not know how the organisational vehicle works.
One contributor said "How do they gain expertise? How… will the theoretical part of learning regain importance?"
Leo (26-30) captured it well: "Senior leaders have 30 years doing things with screwdrivers and hammers. We may not get that chance. How do I establish credibility in 10 years the way someone who did the work does now?"
Or Clara (25) - "I'm learning how to make things work. Not how they work."
Or Maitena (27) - "AI has been helping us to do everything like faster and more efficiently as I said and I feel in my case in anything that I try to do - it's robbing me of the experience of doing something and learning by doing." Understanding is where experience lives.
Christina (30) is troubled by being “removed from the source of things”. More pointedly, one person asked how they can learn if they are not around older people.
Skyla (21) - a final year student - is taking steps to deal with this. She has stopped using AI for idea generation because "that's your mind, that's your brain — you can't get a robot to generate what you actually think on a topic."
What is surprising here is that this concern came most strongly from people who are pro-AI and heavy users, not from resisters. Skyla's self-imposed restriction on AI for idea generation is a remarkable example of a young person actively protecting their own cognitive development.
A fascinating corollary we heard is that non-technical people are now giving technical directions with AI-generated confidence.
There are likely to be considerable and rising tensions around expertise for many years to come. It is all very well for the AI advocates to throw around the word 'democratisation' (meaning AI enables everyone) but the ebb and flow of confidence and credibility is going to be a key dynamic. Are we really arguing that depth of human understanding will not matter? That sounds both bleak and patronising.
4. Jobs
Are jobs disappearing? Yes — there is real anecdotal evidence of this, and it compounds anxiety. Several participants moved from anxiety to actual evidence about entry-level job contraction.
Two final year students Erin and Skyla (both 21, about to graduate) spoke directly about the graduate market feeling broken. Erin said "The graduate market is already extremely difficult. There aren't enough jobs for people coming in, and that's going to get increasingly worse" and linked this explicitly to cost of living as a compounding pressure.
Noah (26) said that companies are explicitly telling him they are "not hiring entry level" and cited a claimed 30% reduction in entry-level roles.
Two of Marcel’s (30) partner's colleagues were let go the day after an AI agent was deployed. He told us they had no transition support after 10 years of service.
Clara (25) told us that in her workplace, whether they'll have jobs in 5–10 years is a weekly conversation.
Alex (29), recently laid off, knows someone whose AI chatbot startup "exploded" because he got ahead of the curve — and explicitly wishes he had listened to that advice earlier.
In part 2 we will look at the evidence for and against widespread AI-driven job destruction, and at whether global political instability (particularly around war, the oil price and US trade policy) may be a significant co-driver of employer caution and hiring freezes.
5. Isolation
It is not just finding jobs. Teams are shrinking in front of their eyes, and as a result work is sometimes getting lonelier.
Participants across multiple sessions described the same structural change: teams are shrinking because – they believe - AI is being used to replace headcount, not just augment it.
Fernanda (30) said it well - "Before, I thought this project needed a team of five. Right now, we're just two. It's feeling kind of alone."
Christina (30) has a similar experience. "I used to have a team of five I would work with exchanging ideas. Now I'm at times alone, just told to use Copilot."
Indeed it feels like some critical social and creative texture at work is deteriorating. It was striking that several people (though not all) commented that this has happened recently. We specifically asked if things had changed in the last year – and often the answer was yes.
Clara (25) - "People aren't interested in their projects anymore... it was different only a year ago."
Alex (29) has been working on a marketing playbook for a major sports brand using AI for all the images and ideas — clients loved the low cost, but the output felt "third-rate, not first-rate." Fast, but not better. In his view, the client was unaware. "It helped us get there faster, but it didn't help us get there better".
Lucy (25) described AI-enabled efficiencies as real but experienced the new style of work as impersonal — the agency she works at could reach more clients at lower cost, but something qualitative was lost.
There is an interesting coda to this – one respondent commented that they are now seeking out smaller clients who can afford the cheaper rates created by AI-enabled, more efficient work. We’ve heard in the UK of at least one premium large services company going after mid-tier customers for the same reason. The concept is definitely circulating in the consulting industry. The 'democratisation of consulting' narrative is being pushed mainly by AI-native startups and boutique firms rather than the Big Four themselves – firms like Xavier AI are explicitly pitching that they can deliver McKinsey-quality strategy to companies that could never afford McKinsey.
Lastly, this isolation mechanic may hint at structural issues to come. Where will staff belong in a firm? How will they organise? This was not, however, preying on our subjects' minds; their concerns for now are closer to themselves.
6. Mirroring
AI flatters us unless we specifically instruct it not to — and issuing that instruction takes some doing for the average human. Most of us are needy to a degree, and AI feeds this. It is also relentlessly comprehensive in its answers.
Marcel (30) and Jasper (23) independently identified the same epistemological risk: that AI, by being so consistently affirming and comprehensive, removes the productive friction of human disagreement.
There are three angles to consider here.
Firstly, Marcel: "it feeds this narcissistic thing in you - whatever you do is going to be amazing" - whereas in real life, as we put it back to him, your sister might say that's obviously dumb, a point Marcel immediately confirmed. Or as Jasper said: "I've never had AI disagree with me. It's like bouncing ideas off a mirror."
Secondly, Elena asked: "The intention — who owns the intention? That is the other topic. The intent was ours, it wasn't an AI idea. That's a very important word." Elena's framing of 'ownership of intent' is a distinct category — separate from what AI can do (expansion) and what people hope for (expectations). If you are a regular Full Moon reader you may remember we highlighted Intent as a key task for designers in Who Designs The Future When Everyone Can?
This then leads us to the accountability question. Multiple participants (especially Clara (25) and Jasper (23)) circled around who is responsible when AI-assisted work goes wrong. This is emerging as a live, practical anxiety. The mirror is not a useful answer.
Thirdly, an observation that Full Moon will explore more later, offered here in passing: humans often misunderstand each other in conversation. It is natural, and talking is a way we try to align. But what if misinterpretation itself is productive - and creates new ideas to build on?
The better AI gets, the less it will misunderstand us. The mirror becomes a cage.
7. Guidance
Essentially, institutions are not providing helpful guidance - or in many cases any guidance at all. That includes companies (a bit, but self-interested), universities (unrealistic) and government (none).
This was consistent across all six sessions.
Universities: Universally seen as behind. "They just say don't use it, don't use it, which is just going to get you left behind" (Skyla, 21).
Employers: are focused on efficiency and margins, not worker development or wellbeing.
Government: Irrelevant — not a single participant cited government as a useful guide.
Perhaps unsurprisingly, the real sources of support were friends and peers (most common), specific newsletters or Substacks, individual trusted figures, and occasionally - and ironically - AI itself.
Pamela (26) introduced the term 'ethical thought leaders'. It is telling to us that when the internet and smartphones arrived, nobody was looking to ethical thought leaders for guidance. The fact that they are now says something significant about how different this transition feels.
As a category, this means people perceived to have no financial incentive to mislead. The call for ethical guidance (rather than just expert guidance) is new and notable.
Marcel (30) specifically cited Full Moon as a trusted source precisely because "you have no benefit in transmitting these ideas" - no commercial agenda. It has certainly crossed our minds that while entrepreneurs have a number of networks they can look to for support, mentoring and funding (Unreasonable Group are a good example) nothing like this exists for the majority of younger people who work for organisations. Perhaps it is just friends and family. Yet should there be more?
8. Skills
We asked: "In ten years, what will the biggest change to working life be — that most people aren't yet taking seriously?" There was a striking commonality in their answers, which unhesitatingly zeroed in on skills, demonstrating that the personal challenge of AI is uppermost in their minds.
Their answers on which skills humans need to focus on also echoed the most frequent public discourse on what remains for people to do in an AI age – tasks that are ‘soft’, collaborative and very human.
Fernanda (30) talked of emotional intelligence, storytelling, the ability to share ideas clearly and bravely.
Maitena (27) called out "taste" — knowing what looks and feels human.
Victor (26-30) cited “adaptability”: learning how to learn.
Pamela (26) said "evolve or die" - yet the evolution she envisions is towards more creativity, not less. She sees her future self running a retreat centre for women's holistic wellness. AI is the enabler, not the goal.
James is adapting: "I'm focusing less on being 'someone who can design an app/deck/etc.' and instead spending more time in niche emerging problem spaces and working closer with people. I'm not abandoning my craft, just not tying my career to a specific tool or pixel pushing."
Lucy (25) envisioned the importance of handcrafted, artisanal skills - making things with your hands - as a counterpoint to AI ubiquity. She imagines herself making jewellery, doing upholstery, painting.
Skyla (21) answered with “human connection” and balance - hoping AI will reduce the administrative burden enough to allow better work-life integration.
Elena imagined "thinking with head and hands" - the combination of theoretical knowledge and practical skill as the foundation of genuine expertise and ownership.
Some, but not all, of these imagined futures are explicitly analogue, craft-based, or human-relational - with AI positioned as an enabler of freedom from routine, not as the destination itself. This is the clearest articulation of an 'AI as liberation' narrative. In fact, we also noticed that almost no-one talked explicitly about a career future built on top of AI as the key enabler, proactively chosen.
9. Reactive
Which brings us to our last major theme - ambition and AI. A consistent pattern across all sessions was that the majority of participants, and their wider social circles, are in reactive mode - adapting to AI as it arrives rather than explicitly getting ahead of it.
Only a handful of participants described themselves or people they knew as genuinely proactive - one is building with Claude Code, another building a business solo using AI, a third moving into Figma MCP and AI-native design.
One participant said "the only people I know that are proactive about it are people that are working very closely with it - mostly their careers are in AI and tech."
Alex (29) knows someone who built AI chatbots six years ago and "rode the curve" - but describes his own metaverse venture at the same time as a failure. The contrast between the two paths was vivid and personal.
The majority described their AI use as: using free tiers, using employer-provided tools, and using AI for admin and efficiency rather than transformation. Very few paid for an LLM personally. This is consistent with the broader research landscape: McKinsey has found that demand for AI fluency has grown sevenfold in job postings - suggesting a growing gap between what employers want and what most workers are doing.
We do not think that the reactive approach was a feature of the individuals we spoke to - they did not lack ambition or thoughtfulness, or indeed engagement with AI as an issue. Our theory from what we heard is that it is still too early in the technology cycle for most people to move beyond trying to grapple with what is going on (see the themes of Ambivalence and Speed above). Just as enterprise is largely focussed on efficiency as an AI outcome, and not yet growth, so the same is true - with a different vocabulary - for young people at work or trying to find it.
In summary - judging by what an international group of people who are either in education or in their first decade of work told us - AI is viewed with mixed feelings. There is hope that work/life balance will improve over time, yet fear that jobs are disappearing right in front of them. Staying on top of the intense evolution of what are becoming critical tools, demanded by employers, is hard work. No-one is really guiding the agenda or offering meaningful help. For those who have work, culture is at risk: the texture of a working environment threatened by isolated people doing less interesting work.
On the one hand, the depth of knowledge and expertise required by the future looks troublingly shallow – leading some to conclude that passion may best be placed into jobs that are not tech dependent or indeed into private hobbies. And yet the picture of the human skills which remain for us to deploy is encouraging and exciting. There is a requirement not to become fixated by the mirror that AI holds up to us and to champion intent.
The key behaviour as of now is reactive, but our view at Full Moon is that this will change over time. It always does, as we internalise and rebound from the drama of change.
PART TWO – WHAT THE EXPERTS SAY
Now that we've heard from young people directly, let's hold their testimony up against what the experts are saying and see how it fits.
Of course there is deep interplay between this, and the public narratives of AI disruption. So let’s look at these too, briefly, to understand how the future of work is being framed.
We’ve spent some time looking at what’s out there. Taken together it is as ambivalent as our contributors were. And a heads-up: by the time we publish this there will have been more reports, more commentary, more discussion. Ironically, we are adding to that.
Does what we hear match what “experts” are saying?
Job Disruption – Now, Soon or Never?
It is striking that our groups are all seeing layoffs and a tightening job market – especially for graduates – right now. For them it is underway.
In the public discourse it is not so clear. Roughly there are three approaches – a) expect lots of disruption, b) it's more nuanced than that and unlikely to be bad, and c) avoid predictions and focus on how skills will adapt and grow.
Let’s start with the apocalypse vendors. Chief among these is Dario Amodei, CEO of Anthropic. He is very clear that the one place where public concern over AI is correct is its effect on jobs. In The Adolescence of Technology, a major essay he published in January 2026, he states that AI will disrupt 50% of entry-level white-collar jobs over 1-5 years (note disrupt - that does not mean all those jobs will go). This is different from previous historical disruptions because of its sheer speed and cognitive depth (it affects many jobs, not just a few). It will hit entry-level jobs harder because the skills needed for many of those jobs do not differ much between sectors. He concedes that AI itself will create new jobs, but thinks that AI will take many of those too, as it adapts faster to new gaps. He thinks it is not really displacing jobs yet in 2026 - but is poised to do so. He suggests three interesting strategies: that governments need to step up to look after mass displacement, that philanthropy should do more, and that firms could go on paying workers after their employment ceases.
Supporting Amodei, research from Stanford's Digital Economy Lab, led by Erik Brynjolfsson and colleagues, offers some of the most granular evidence yet of AI's effect on entry-level hiring. Using payroll data from ADP covering millions of US workers through September 2025, they found that workers aged 22-25 in AI-exposed roles - software developers, customer service representatives and similar - saw employment fall by around 16% relative to the least-exposed group, while older workers in identical roles were unaffected. In absolute terms, youth employment in high-exposure occupations dropped 6% from late 2022, while employment of older workers in those same roles grew by 6-9%.
Crucially, the effect only appeared where AI automates tasks rather than augments them - in augmentation roles, youth employment actually grew. The pattern holds across firm types and college/non-college occupations alike, and tracks closely to the launch of ChatGPT in late 2022. These are serious economists with global reputations, and the signal is robust.
Lastly in the large-scale job loss camp we have MIT and Oak Ridge National Laboratory, who published The Iceberg Index in November 2025. They are looking forward and projecting, not reporting current numbers. It is, in other words, an informed guess.
Using a labour simulation tool mapping 151 million workers as individual agents across 32,000 skills and 923 occupations, MIT found that AI can already replace 11.7% of the US labour market - representing roughly $1.2 trillion in wages across finance, healthcare, and professional services.
The name is deliberate: the visible tip - layoffs in tech and computing - represents just 2.2% of total wage exposure; the larger mass beneath the surface lies in HR, logistics, finance, and office administration.
At the other end of the spectrum, we have those who are much more relaxed about the future of employment, and just as authoritative.
Yale Budget Lab's analysis of employment data since ChatGPT's launch finds broad stability rather than disruption, concluding that while AI anxiety is widespread, the data does not yet support it at an economy-wide level.
"While generative AI looks likely to join the ranks of transformative, general purpose technologies, it is too soon to tell how disruptive the technology will be to jobs."
And what about OpenAI in all of this?
In April 2026 OpenAI explicitly warned against two errors: "overstating immediate disruption and understating long-run impact." That framing signals they're acknowledging real risk without claiming mass unemployment is imminent.
Their key numbers show that across 900+ occupations covering 99.7% of US employment, their framework categorises jobs into four buckets:
- 18% are at relatively higher short-term automation risk
- 24% may see declining employment as task composition shifts - but crucially, workers remain necessary for key tasks within those roles
- 12% could actually grow because AI lowers costs and increases utilization/access
- 46% are likely to see little change in the short term
So they're not saying "AI will reduce employment overall." They're saying the picture is fragmented: some jobs shrink, some grow, most are stable in the near term. That said, 42% exposed to some kind of risk seems high if one translates that to - say - 25% of jobs going in a decade.
And business leaders? At the World Economic Forum (WEF) in Davos in 2026, there was reportedly widespread pushback to the Anthropic messaging. Fortune reported that
“Amodei has been off about the rate at which technology diffuses into non-AI companies before. Last year, he projected that up to 90% of code would be AI-written by the end of 2025. It seems that this was, in fact, true for Anthropic itself. But it was not true for most companies. Even at other software companies, the amount of AI-written code has been between 25% and 40%. So Amodei may have a skewed sense for how quickly non-tech companies are actually able to adopt technology.”
Indeed WEF published a thoughtful piece on the Future of Jobs in 2025 that usefully reminded the world that other influential trends are in action too – like general economic slowdown, climate change mitigation, demographic shifts and geopolitical tensions. As we said earlier, the future of work debate is almost always framed as an isolated subject – but it isn’t. It is not all about tech.
A lot depends here on whether you believe that AI is unlike any previous disruption and will take out many different types of jobs very soon, versus the belief that humans are endlessly inventive and new jobs will emerge using the new technology. It is worth noting that the latter camp increasingly acknowledges that there may be some short-term impact. Scott Galloway, who spoke at Fortune’s Global Leadership Dinner in Davos, said
”that every previous technological innovation had always created more jobs than it destroys and that he saw no reason to think AI would be any different. He did allow, though, that there might be some short-term displacement of existing workers.” (quote from Fortune report).
It is worth repeating that fault lines also appear between those who think current employment challenges are economically structural, and those who blame AI.
Let’s change the narrative to skills
Our groups unanimously see a change coming in what skills will be needed for work in 10 years' time. Broadly, they agreed that repetitive, deterministic roles (which can be done by computers, as there are right and wrong answers) will be gone. The worker of the future will need a range of what we might call softer skills to thrive. They talked of “connection”, “taste” and “emotional intelligence”.
There is a lot of common ground between what they had to say and the public academic and business discourse on the same subject. It is possible that this alignment reflects an emerging consensus position, with our groups influenced by the “expert” public debate.
McKinsey talk extensively about the future of skills and state that
“Most human skills will endure, though they will be applied differently. More than 70 percent of the skills sought by employers today are used in both automatable and non-automatable work. This overlap means most skills remain relevant, but how and where they are used will evolve.”
They also point out that demand for AI fluency is growing faster than any other skill.
“As AI technology matures, demand for related skills is spreading beyond development roles. Demand for AI fluency jumped nearly sevenfold in the two years through mid-2025. It is now a job requirement in occupations employing about seven million workers.”
MIT Sloan see a future of worker augmentation, not replacement. They distinguish between automation (transferring tasks to machines) and augmentation (AI enhancing human productivity). Human-intensive tasks - tasks that cannot be done effectively by machines - actually increased in frequency between 2016 and 2024. Tasks newly added to the US labour database in 2024 show higher levels of human-intensive capability than pre-existing tasks.
They use an interesting framework they call EPOCH for their assessment. It stands for:
- Empathy and Emotional Intelligence
- Presence, Networking, and Connectedness
- Opinion, Judgment, and Ethics
- Creativity and Imagination
- Hope, Vision, and Leadership.
“Each of these categories includes uniquely human capabilities that enable humans to do work in areas where machines are limited.”
Encouragingly, FT data journalist John Burn-Murdoch, in How To AI Proof Your Job, points out - counter-intuitively - that social skills have delivered the best payback to workers over the last few years. So the shift to softer skills that our groups foresee as a result of AI is actually on trend.
“When we look at employment numbers and earnings for different occupations, those that have fared best combine quantitative abilities and interpersonal skills like social perceptiveness, co-ordinating ability, persuasiveness and negotiation (a group that includes doctors, consultants, economists and, yes, even software developers, according to detailed occupational skill data). And jobs requiring strong soft skills but relatively little mathematical aptitude (among them lawyers, therapists and nurses) have fared much better than those requiring strong numerical talent but fewer social skills (among them statistical assistants and programmers).”
PART 3 - WHERE IS THE VISION?
We've heard from the young people who wanted to talk to us. We've surveyed the pronouncements of experts and commentators on the future of work, including those heavily vested in the new tech. So what to make of it all?
Step back and one thing becomes clear. At the heart of the testimony - between the notes, as Debussy would have said - there is a silence. Or rather, an absence. Specifically, an absence of vision.
It was striking that our conversations revealed:
- A strong belief that AI change is real and happening
- That no-one is providing useful guidance
- That young people are working so hard to keep up that they have not yet shifted from reactive to proactive mode
“On the one hand, there are people who think it’s going to be the deathblow to humanity,” says David Autor, the Daniel and Gail Rubinfeld Professor in the MIT Department of Economics. “On the other are those who think we’re about to hit the inflection point for the singularity. I don’t think it’s either of those things, but it’s some of both.”
He cuts against the grain of the doom narrative. At the heart of his argument is a distinction between automation tools (which eliminate expertise) and collaboration tools (which are force multipliers for expertise). He warns that much of current AI development is skewed toward automation, but argues a better future lies in AI designed to enhance rather than replace human judgment. His concept of 'mass expertise' - AI enabling less specialised workers to perform tasks previously reserved for experts, potentially rebuilding the middle class - is the most hopeful serious argument in the field.
He lands a key point – AI is not deterministic - we have a choice.
The problem isn’t one of technology but philosophy. “Right now, AI is guided by this model of automation, where the goal is replicate, accelerate, overtake, and replace,” Autor says. “If that’s what you’re shooting for, you are going to design differently than if your goal is to make doctors better. It’s not really an engineering problem – it’s a design problem…”
Too often, people assume AI’s trajectory is outside of our control, Autor argues, while in fact artificial intelligence has a lot to do with human intention.
“A lot of people are much more fatalistic about this than they realize. But AI is not deciding our future, we are,” he says. “Recognizing that agency is the first step to tackling this problem in a way that is more effective for everyone.”
As we have seen, the experts argue about percentages and timelines. The debate will continue long after this essay is published, and, as the saying has it, only time will tell. Or will it? The problem with that cliché is that it tramples on the reality of human agency.
What the young people we spoke to are telling us is that their anxiety is not really about job numbers. It is about something harder to quantify: meaning, credibility, connection - and intent. Elena asked it directly: who owns the intention? Skyla answered it in practice, by refusing to let AI generate her ideas. Pamela named the loss precisely: grief that creative output is now prompted rather than imagined.
These are not soft concerns at the margins of the debate. They should be the debate. Because if what humans distinctively bring to work is intentionality - the capacity to decide why something is made, not just how - then the question of how we protect and cultivate that capacity is critical. We explored intent in our essay on design and AI. The argument was that as AI makes everyone a designer, intent becomes the only true differentiator. Now we can see that the same logic applies to work itself.
The good news is that every person we spoke to still had it. The ambivalence, the grief, the reactive stance: these are temporary and do not signal surrender. They are working out what to hold onto, what to let go of, and how. What they need, and what is almost entirely absent, is a vision. Of everyone we have looked at, Autor comes closest to offering one. When you think about what our research group told us, what is desperately needed is an empowering picture of where this all goes.
What does an AI enabled future of work look like day to day? Who gets to share in it and how? What new jobs should we prepare for (simply 'upskilling' is not a meaningful answer)? Why should I be excited? There is no shortage of doom-laden narratives, but there is a complete absence of a big, coherent vision of how this future can be brilliant for young people. Or better still, competing versions of the same. Then maybe we can choose.
It needs to be simple enough for workers (who are also voters) to grasp. It needs to be hopeful and give young people at the start of their careers a guidance system to inspire and help them proactively to embrace the potential. We need to ensure that choice is available to young people.
Right now, that’s completely lacking from the frontier model companies, governments, academia and, frankly, commentators like us.
At Full Moon, we’ll be trying to build that vision through our work in the coming year. It’s a gap that urgently needs to be filled.
And if you want to think about AI and future of work inside your organisation, then come and speak to us.