The Commoditization of Competence: Depth is the Only Hedge Against AI
If you spend five minutes on Tech Twitter, you’ve heard the narrative: The era of the “Generalist” is here.
The prediction goes like this: with AI, a single person can now be a coder, a designer, a marketer, and a legal team all at once. We are told that the future belongs to the One Person Unicorn: the person who is good enough at everything to create a billion-dollar company.
I disagree. In fact, I believe the opposite is true. The Generalist isn’t and won’t be the winner of the AI era; they are the most at risk. AI doesn’t just fill the gaps in your skills; it raises the baseline of competence.
1. The Generalist Founder
The flaw in the One Person Unicorn argument is the assumption that the combination of a mediocre human and AI creates exceptional value.
In the pre-AI world, being a Jack of all Trades (a 4/10 at several things) was valuable because integration was expensive. If you could write decent code and decent copy, you saved the cost of hiring two people and the friction of communication.
But AI has driven the cost of 4/10 work to essentially zero.
- If an AI can write 4/10 marketing copy instantly, a human who writes 4/10 copy adds no marginal value.
- If an AI can generate 4/10 code, a full-stack generalist who writes 4/10 code adds no marginal value.
The One Person Unicorn may exist, but its creator won’t be a generalist stitching together mediocre outputs. If you just manage AI agents that do average work, you aren’t building a company; you are operating a commodity that anyone else can replicate in five minutes.
To create leverage in an AI world, you cannot just be the manager/operator of the tools. You must provide the Alpha: the deep insight that the model cannot generate on its own.
2. Difficulty Is Not a Ladder
We tend to think of difficulty as a linear scale with a total ordering. We assume that if a computer can solve a Level 9 problem (like passing the Bar Exam or writing a complex SQL query), it must naturally be able to solve all Level 4 problems (like logical consistency or basic intuition). This is false: difficulty is not a ladder.
AI capabilities are uneven. It can solve specific classes of problems that are impossible for humans (e.g. analyzing 10 million rows of data in seconds), yet fail at trivial tasks that are easy for humans (e.g. maintaining context over a long conversation or basic spatial reasoning).
The generalist assumes AI is a sufficiently smart human that is good at nearly everything, and they tend to trust it blindly. Even if their first instinct is not to trust it blindly, the pace they operate at requires it.
The specialist understands that AI is a different kind of intelligence. They know which classes of problems the AI dominates, which it merely mimics well, and which it fails at. They also have stronger heuristics about pitfalls, even in topics they haven’t mastered.
If you are mediocre at everything, you don’t know where the frontier lies. You will trust the AI on a task where it hallucinates, and you will ignore it on a task where it excels. Only deep domain knowledge allows you to map the jagged edge of the tool’s capability.
3. The Leaky Abstraction (and The Terence Tao Paradox)
AI tools are actually just another abstraction over problems.
We run into Joel Spolsky’s Law of Leaky Abstractions: all non-trivial abstractions, to some degree, are leaky. When the abstraction works, it saves you time. When the abstraction leaks and complexity bubbles up, you are helpless unless you understand what lies underneath.
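A classic pre-AI example makes the law concrete. Floating-point numbers are an abstraction over binary fractions, and the abstraction leaks on arithmetic as simple as 0.1 + 0.2 (a minimal Python sketch, not tied to any AI tool):

```python
# The "decimal number" abstraction over binary floats holds
# for most arithmetic, then leaks here:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Fixing the leak means understanding the plumbing (binary
# fractions) and dropping down a level, e.g. to exact decimals:
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```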
This brings us to the Terence Tao Paradox.
When a mathematician like Terence Tao uses a tool like Lean 4, the hardness is visible. Lean 4 has a brutal learning curve for most people. If Tao uses it to solve a 100-year-old problem, we credit his mastery and intelligence because the tool doesn’t hide the complexity. The link between user skill and output is obvious.
AI is different because its interface is deceptive. It uses natural language, masking the complexity of the problem it is solving. When the abstraction holds, a novice feels like a genius. But when the abstraction leaks (when the AI suggests an insecure architecture, hallucinates a library, or optimizes for the wrong metric), the novice is stranded. They cannot fix the leak because they don’t understand the plumbing.
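To make that concrete, here is a hypothetical sketch of the kind of leak an AI assistant can produce: a helper that runs fine in every demo, so the novice ships it. The `find_user` function and the `users` table are invented for illustration; the leak is that user input is interpolated straight into the SQL string.

```python
import sqlite3

# Hypothetical AI-generated helper. It works on every input the
# novice tries, but the f-string splices raw user input into SQL,
# so a name like "x' OR '1'='1" dumps the whole table (injection).
def find_user(conn: sqlite3.Connection, name: str):
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

# Someone who understands the plumbing parameterizes the query,
# letting the driver handle quoting and escaping:
def find_user_safe(conn: sqlite3.Connection, name: str):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Both versions return the same rows on well-behaved input, which is exactly why the leak stays invisible until someone opens the hood.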
The Generalist is entirely dependent on the abstraction holding. The Deep Expert uses the AI as a force multiplier but is ready to open the hood the moment the abstraction leaks.
4. The Transfer of Rigor
There is a fear that specialization traps you in one specific field. People generally think, “If I spend years mastering a hard (and probably niche) topic, I won’t be able to pivot when the market changes, so I should learn popular things.”
My argument is: Deep mastery is the most transferable skill of all.
If you push yourself to reach a hypothetical 9/10 level in a difficult domain (whether it’s low-level optimization, distributed systems, or competitive sports), you learn something that a generalist never sees. You learn:
- How to create complex mental models.
- How to push through the plateau, where learning becomes painful and slow.
- How to distinguish signal from noise.
Once you have learned the art of mastery, applying it to a new field is faster. An expert in physics who switches to finance doesn’t start at zero; they bring a mindset of rigor that a generalist never experiences.
Engineering, at its core, is solving problems you don’t yet know how to solve. Every hard problem (hard for you, not by some universal scale) you solve yourself is a level up: it builds the cognitive stamina you need for the next one. But if you depend on AI for every part you cannot handle yourself, you aren’t actually leveling up; you are just outsourcing. The real danger isn’t that AI will replace you; it’s that you will reach a point where the AI fails, and you’ll realize you never built the muscle needed to climb the rest of the way on your own.
The future belongs to serial specialists like John Carmack: someone who goes deep, conquers a domain, and then applies the same rigor to conquer the next one if needed. (Though I’m not sure the market will actually evolve fast enough to demand that; it’s out of scope for this post.)
5. Disposable vs. Durable Knowledge
The tactical mistake I see technical people making right now is failing to use Just-in-Time learning. Some people learn every new AI tool, agent framework, and prompt-engineering trick released this week purely out of FOMO. This is a poor investment of time because of the Lindy Effect.
- Durable Knowledge: Math, physics, and engineering principles have lasted for centuries; they will last at least 50 more.
- Disposable Knowledge: The latest AI wrapper tool will likely be obsolete in six months.
The barrier to entry for these tools is dropping to zero. If a tool is easy to use, “knowing how to use it” is not a moat. Focus on Durable Knowledge (principles, hard problems) with Ahead-of-Time learning right now. Ignore the flashy tools until you really need them.
And that doesn’t mean don’t use NotebookLM.
TL;DR
The world doesn’t need more people who are “pretty good at using AI tools.” We shouldn’t be against using AI. We should be against relying on it to mask mediocrity. AI is a powerful multiplier, but it requires a non-zero number to multiply.
And, big thanks to my friend Alperen Keleş for the sharp eyes and great suggestions.