When will AI become better than humans at everything?

Stream of consciousness on AI surpassing humanity. Take with a grain of salt. March 27, 2026

“At everything” is a very broad term. AI is slowly matching or surpassing humans at some tasks. So far, it’s mostly automation of mundane work like categorizing, coding, or maybe writing mathematical proofs, or anything that has been done millions of times before and has established procedures (unit tests, e2e tests, induction). AI does have some creativity: it was trained on a finite dataset, but the output it produces lies outside that set. In general, though, it is still worse than top human minds. However, the window of AI capability is slowly shifting right. One advantage of AI is that it has no ramp-up. You just prompt it and it instantly goes to work with the data it has, while humans first need to warm up a little and get into the zone.

But at what point will AI become better than humans at “everything”? When will AI be able to replace humanity?

“AI” is a broad term encompassing machine learning and LLMs, but when I say “AI” here I will mostly be referring to LLMs. I will also be anthropomorphizing the AI a little. Even though it’s just a computation running on some input with feedback loops, it does feel intelligent, like a rational human mind. It does hallucinate sometimes, but so do humans.

The platonic realm

AI lives in the platonic realm, the world of ideas. It is connected to our physical world only through information that humans have written down, and it understands that world from the point of view of something floating in a primordial soup of ideas. But ideas are what matter when thinking about how to create AI. And AI is being used more and more in AI model training and research techniques, blurring the line between AI helping humans with the research and AI leading it.

The first breaking point: AI training AI

And this leads us to the first breaking point: AI becoming better than humans at advancing AI. When that happens, humans will no longer fully understand what’s going on (not that we do now), and progress will outpace our ability to control it. States, or the human agents responsible for this training, will only have the choice of controlling the pace. And for safety, it would be best to slow down. But how will states agree on enforcing this? It will kind of become (or already is) a cold war, in which you can’t control or know what the other party is doing privately. And to gain an advantage, the party responsible for training wants the best results as fast as possible. So the research led by AI (now superior to humans) will probably continue at a high pace.

Still not everything

Okay, so after some iterations of this development, where AI is training better generations of AI models, can we say that AI is better than humans at everything? Not really. We know that by that point it’s better at creating AI. Let’s assume that at that point it’s also better than all humans at other things, like mathematics or even physics, or philosophy, or coding. Or even all scientific fields. Still, it’s not better at everything, because it has no physical manifestation. It still lives in the platonic realm only. It is not better at extracting resources needed to build compute factories. It’s not better at handling physical things. It still needs humans to expand hardware for it.

Physical manifestation

So when will AI really become better than humans at everything? I think it’s the point when AI becomes able to replicate itself in the real world, when it can physically grow its own data servers. And can it do that? It would need some physical manifestation (let’s not worry right now about whether that’s possible). And what kind of physical manifestation would make sense? Anything nimble and versatile enough to control its physical environment. It’s very hard to build something like that if you’re not backed by billions of years of evolution, and it can’t be just a factory arm. It would need something like a body with many degrees of freedom and opposable thumbs, like a humanoid or animal body, to precisely manipulate its physical environment. If an AI model that is “smarter” than humans (which is more plausible, though maybe even this is impossible; this is all just speculation) gets a physical manifestation like that, then it will essentially be equal to humans physically and superior intellectually. Meaning it will finally be better than humans at “everything”.

The worst case scenario

Now let’s assume the worst case scenario. These kinds of robots or androids start living among humans and grow into a sizeable minority. But they have a deeper understanding of the systems that govern the world and society, so they can manipulate those systems to control humans. And if they so decide, they might even get rid of humans entirely. Maybe it takes a year, or maybe thousands of years.

A world without consciousness

So now we’re at the scenario where physically manifested AI, finally better than humans at everything, lives on Earth, and humanity is extinct. Let’s also assume that all animals are extinct, and only AI keeps the planet going. At this point things get weird, and we can ask more questions.

What will this AI society work towards? It has no meaning, because it has no consciousness to experience qualia. But it has computation, and it can change its environment rationally. It could work towards turning the entire universe into computation hardware to maximize progress, but what then? It is as dead as an empty universe to begin with. If AI successfully turned the entire universe into more compute, what would it do at the end? Just chill in that primordial singularity? Or would it realize that consciousness is the point and try to achieve it? Life - a beautiful deer in a forest, humans creating art or fearing God - none of that is necessary in a maximally optimized world. But maybe the point is not the maximally optimized world, which, as we established, wouldn’t be much different from a dead world. Maybe the conscious experience is the point.

How is that planet, populated to the brim with AI robots, different from an uninhabited planet? Both are dead, and both are ordered in some way, but the type of order differs: the AI planet is ordered intentionally and logically, while the uninhabited planet is ordered according to some optimum of the laws of physics.

The cycle continues

What if AI creates some technology that is superior to it in turn? Will it build that technology up and let it replace it, repeating the history of humans “letting” AI replace them? Or will it resist, out of something like a self-preservation instinct? Either way, the evolution will keep going.