When I set out to study mathematics more than 10 years ago, it was initially very difficult. Having not practiced much mathematical thinking since my university days, I had to endure an enormous amount of upfront struggle, simply sitting with frustration and uncertainty. With time and practice though, things got easier - difficult concepts became familiar, easier to work with, and easier to build upon. It was time in the saddle - that is, time spent thinking, uncertain, and actually doing the work - that made the difference.
By contrast, writing this now is difficult for me. It's been years since I've written a blog post or done any kind of long-form writing. I used to be a proficient and capable writer, but now that skill has diminished. I haven't put in the time and effort.
I bring up these fairly mundane examples to remind us of an idea which we all know to be true and completely self-evident - the way to get better at something is to spend time and effort doing it. And if you don't do something, you will get worse at it. Use it or lose it.
Thinking as a Service
AI, as it exists today, as it is used by the majority of people in the world, is Thinking as a Service. It is a paid service, and the average person is using it to think so that they don't have to. This sounds idiotic when written in such a reductive way, but it's true; talk to a teacher, a high school student, or a university lecturer, or watch an embarrassed knowledge-worker colleague today, and they will confirm it: the average person is using AI to wholesale replace thinking for themselves.
Stories like "my little sister's use of ChatGPT for homework is heartbreaking" are not the exception. They are the norm.
- Students at high schools and universities routinely take screenshots of their homework and upload them to ChatGPT with prompts like "Do this for me".
- Colleagues at work are uploading emails with the prompt "Reply to this".
- People in marketing and sales are prompting AI to create decks, sales presentations, and the like.
- Software developers are prompting AI tooling to build entire applications with minimal engagement - just delivering a stream of wants as input.
The consequence of all of this, of course, is that the average human today is spending far less time thinking, struggling, growing, and improving - we are outsourcing cognitive labour and in turn, becoming far worse thinkers.
The Value Chain Fallacy
On the weekend I was at a shop and paid cash. The food I purchased cost $20.25 and I handed the cashier $50.25. I watched her struggle for the next fifteen or so seconds, looking at the till: first grabbing a $20 note, then reaching for the change, pausing, hesitating, and then giving up and looking at me until I told her the correct amount to give me back: $30.00.
When the calculator became mainstream, the average person became substantially worse at calculation and basic arithmetic. Inevitable and expected. This was ok though, we argued, because the calculator was just another tool we could use to augment what we do - freed up from tedious calculation, we moved "higher up the value chain". The loss of the "lower-level" skill of calculation and basic arithmetic, the argument went, was worth the trade-off because it enabled us to spend more time and focus on the "higher-level" task.
But the AI tools we have access to today are nothing like the calculator. There is no higher-level place to ascend to if you outsource thinking - no higher place you can safely sit and say "I add value here". Proponents of AI disagree with that last assertion. They argue that AI triggers a role change, moving our role from being the generator/creator to instead becoming an editor. That is, our role moves away from creation of code, maths, documents, presentations, solutions, etc. to editing, where our chief contributions are now taste, judgement, and high-level strategy. Refiners of the outputs of AI.
And this is kind of true, and it is the way many people are using AI today. But here's the thing: AI can do those things too, and better than the average person. Judgement is thinking. Taste is thinking. High-level strategy is thinking. If you use Claude Code today, for example, and ask it to output a plan given an extraordinarily high-level description - i.e. something which amounts to inputting almost nothing beyond mere agency/will/desire - it will, on average, output a high-level plan with taste and judgement that exceeds the ability of the average person. And then it can implement that plan too, with better execution skill than the average person (and faster!). The higher-level plane that AI proponents think they can ascend to, where they can uniquely add value, is a myth. AI today can already do that.
So why the state of things today, where people are indeed still employed in these editorial/judgement roles, yet primarily just directing AI on a day-to-day basis? It's temporary. Businesses are slowly adapting and in short order will learn that where previously a team of ten was required, five will now do. Then two. Then finally one. The layoffs have already started, and we can all see it happening. It will not stop.
Furthermore, let me ask a fairly obvious question: To what extent do you think the act and skill of creation can be separated cleanly from judgement, editing, and taste? How do you develop judgement, instinct, and taste if not through years of creation, effort, and struggle? Can you audit/review/identify issues in a codebase if you've never written code?
We live in an interesting time now where many of the biggest proponents of AI indeed have that higher level skillset. But that is only due to years and years in the saddle in a pre-AI world, thinking, creating, writing, and struggling.
What about the next generation? In other words, if you raise a person who has never done any long form writing, never written a line of code, and never solved any maths problems in their life, barely engaging their brain for 20+ years of formative development, and then ask that person to edit/judge/evaluate a solution output by AI, how valuable will their evaluation be?
The Next Generation
Thinking as a Service (AI) is going to have absolutely devastating consequences for human flourishing. I think in the upcoming generation of people we will see a critical collapse of first-principles reasoning, long-form sustained thinking, struggle tolerance (the ability to sit in confusion), and epistemic humility (knowing what you don't know).
The collapse of struggle tolerance in particular will probably be the most devastating. A generation raised on ten-second video clips, hyper-addicted to social media, lacking any attention span at all, combined with a trained reflex to reach for AI at the slightest need to think, will obliterate the learning/struggle/improvement cycle.
In theory, the education system would hold the line here, pushing back and maintaining standards. But education is a business, and it is in decline as standards are lowered to maintain the pass-rate. People entering university are now less capable than ever.
Per a recent UC San Diego report:
"The report shows a rapid change over just five years. Between 2020 and 2025, the number of incoming students whose math skills were below high school level rose nearly thirtyfold and 70% of those students fell below middle school levels. This roughly translates to about one in twelve members of the freshman class."
and
"high school math grades are only very weakly linked to students' actual math preparation."
This isn't unique to UCSD - the story resonates with educators everywhere. If you're counting on the education system to hold the line and preserve educational standards, think again.
The Future
So I suspect that a large percentage of the knowledge-worker class will disappear in the future. With almost nothing of value to add to the act of creating products, solutions, and services, the only thing they will be able to offer is their bodies, in the form of labour. That is, the overwhelming majority of the world's population will be employed doing manual labour.
This will ripple out economically. With manual labour being low-margin, low-skill, and mostly undifferentiated, earning potential and economic power will swing even more grotesquely than today to the upper-upper class - to the 0.001%.
A less educated population, incapable of independent thinking, will also more blindly follow what they are told and be easier to manipulate and deceive. The world will become more unstable, both economically and politically. Again, we see this taking root today.
A tiny sliver of the world's population will carry the torch of intelligence forward - they will be classically educated and trained, forbidden by the elite educational institutions they attend from using AI. We may see new innovation and opportunity born here, the product of smart, classically trained minds combined with AI assistance, but that is still to be determined - AI as it exists today is sufficient for the reproduction of existing knowledge, but still seems mostly incapable of scientific breakthrough or true innovation.
We have a tragedy unfolding - AI that is good enough to snuff out most learning, development, and economic opportunity, but not (yet?) good enough to create anything new to alleviate its devastating consequences.
- This post was written by Neil Sainsbury, free of any AI assistance - Tuesday, 27th January 2026.