January 15, 2026
Insight #3 – AGI is not one thing, but two
There's Skill AGI, and there's Meta AGI (no pun intended)
Skill AGI is the AI into which every human skill gets trained – from coding and protein folding to poetry, music composition, and chess. This is the "spinal brain"
Meta AGI is the AI that can learn new skills efficiently, with minimal examples – the way humans do. This is the "prefrontal cortex"
Skill AGI can get to ASI very quickly – within years, not decades. I'd argue that LLMs are already superhuman at what they do: pattern matching across vast knowledge, generating coherent text, and holding context across long conversations.
The funny thing is that Skill AGI alone might be enough to transform the economy. We might not need anything beyond throwing more data at the current transformer architecture.
But here's the question: what are current LLMs much worse than humans at?
Learning. No human needs thousands of examples to understand a thing. Usually it's enough to read three good examples, then practice for 40 hours – and you're good.
Current AI is nowhere close to that. Self-generated RL gyms are the best shot we have – that's roughly what our brains do when presented with new information: they simulate it and collide it with existing concepts, resolving the "cognitive dissonance"
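To make that concrete, here's a deliberately cartoonish sketch (my own toy construction, not any real training setup): an agent builds its own "gym" by proposing tasks just beyond its current ability, attempts them, and consolidates on success. All names and numbers below are invented for illustration.

```python
import random

random.seed(1)

skill = 1.0  # the agent's current ability, as a single scalar

def propose_task(skill):
    # self-curriculum: the agent picks a difficulty slightly above its own level
    return skill + random.uniform(0.0, 0.5)

def attempt(skill, difficulty):
    # success when skill plus a bit of luck covers the difficulty
    return skill + random.uniform(0.0, 0.5) >= difficulty

for step in range(200):
    difficulty = propose_task(skill)
    if attempt(skill, difficulty):
        skill += 0.05  # succeeding on a frontier task raises skill

print(f"skill after self-play: {skill:.2f}")
```

The point of the sketch is the loop shape, not the numbers: the task distribution tracks the learner, so practice stays at the frontier, which is the "collide it with existing concepts" step in miniature.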
However, here's the contrarian take: we might not even need Meta AGI at all.
Naive scaling logic says that if you could plug Opus 4.5 directly into the economy, then RL it on the dollars it earns, it would be forced to develop sample-efficient learning as a subskill – because learning faster means earning faster. The selection pressure would create the meta-capability automatically, with no separate "meta-learner" architecture needed.
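The selection-pressure argument can be simulated in a toy way (again, my own construction; every number here is an arbitrary assumption): give each "agent" a single trait – how many examples it needs to learn a skill – pay it for skills learned within a fixed example budget, and keep the top earners each generation. Sample-efficiency should improve without ever being optimized directly.

```python
import random

random.seed(0)

BUDGET = 1000  # examples available per generation of "economic activity"

def reward(examples_needed):
    # skills mastered within the budget ~ money earned
    return BUDGET // examples_needed

# initial population: each agent needs 200-1000 examples per skill
population = [random.randint(200, 1000) for _ in range(50)]

for generation in range(30):
    # keep the better-earning half, refill with mutated copies of them
    population.sort(key=reward, reverse=True)
    survivors = population[:25]
    children = [max(1, p + random.randint(-50, 50)) for p in survivors]
    population = survivors + children

avg = sum(population) / len(population)
print(f"avg examples needed after selection: {avg:.0f}")
```

Selection never "sees" the trait itself, only the reward – yet the average examples-needed collapses from roughly 600 toward the floor. That is the claim in miniature: optimize for money, get sample-efficiency as a byproduct.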
Three possible futures:
1. Skill AGI alone transforms everything, and Meta AGI turns out to be unnecessary
2. We build Meta AGI explicitly through better architectures
3. Meta AGI emerges implicitly from economic selection pressure
I'm betting on some combination of 1 and 3.