

If AI ends up destroying us, I’d say it’s unlikely to be because it hates us or wants to destroy us per se - more likely it just treats us the way we treat ants. We don’t usually go out of our way to wipe out ant colonies, but if there’s an anthill where we’re putting up a house, we don’t think twice about bulldozing it. Even in the cartoonish “paperclip maximizer” thought experiment, the end of humanity isn’t caused by a malicious AI - it’s caused by a misaligned one.
I personally think the whole concept of human-level AGI is a mirage. A truly generally intelligent system would almost immediately be superhuman in its capabilities: even if it were no “smarter” than a human, it could still process information at a vastly higher speed and solve in minutes what would take a team of scientists years or even decades.
And the moment it hits “human level” in coding ability, it starts improving itself: building a slightly better version of itself, which builds an even better one, and so on. I just don’t see a plausible scenario where we create an AI that stays at human-level intelligence. It either stalls far short of that, or it blows right past it.