If I remember correctly, the original Terminator story is that Skynet was put in charge of a vast amount of infrastructure, became self-aware, and deemed humans a threat to its goals.
It then launched a nuclear strike against them and ordered a machine army to eradicate the survivors.
I don't think we're that far away from that. All it would take is someone deciding to put an AI in charge of critical infrastructure and defense, or a series of oversights that allows an external AI to take control of it.
Looking at the past year and all the unexpected conclusions AIs have come to, self-awareness is probably not needed for an AI to consider humans an obstacle to achieving some poorly phrased goal.
The Paperclip maximizer theory [0] comes to mind...
Oh for sure, if an AI is given access to critical infrastructure, lots of bad things can happen. But a self-aware AI is still far away, as is an AI that can build things on its own without human intervention.
Trial and error with some scripts until something sort of works is not in the same league as building computer chips and engines and everything else on its own. Eventually we'll get there, but it is a really long way to go.
And I use Claude, too. It is impressive, but without human intervention it often gets stuck, because it lacks real understanding.
If we are getting detailed about Skynet, the plot of the first two movies (IIRC) is that there is a central Skynet that the resistance is about to destroy for good. It's only from T3 on that they describe Skynet as being distributed.
So the question is which Skynet we mean: the one in the popular consciousness, or the one established by continuity in bad movies that only a few people care about.
Well, we may not be confronted with a self-aware Skynet machine in the aftermath.
Maybe it'll just be some dumb model in a datacenter with badly phrased objectives, which happens to have caused severe destruction via various APIs and agents before anyone noticed...