Very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very far away
We’re this close.
We are not close; Tesla is a joke.
Have you seen the Chinese kung-fu robots?
There are people behind closed doors controlling the robot. It’s just a puppet.
AI = “Actually Indians”
I believe that for Tesla; I do not believe that for Boston Dynamics’ Atlas.
Yeah I get that part of it. I’m not familiar with pretty much any of the tech behind AI or contemporary robotics.
Physically fighting a closing CD ROM tray in the 90s made me feel back then that the robot apocalypse couldn’t possibly be that far away.
But then I started working as a programmer, and while there are some niche technologies that are impressive on the surface, today’s “AI” simply lacks the advanced reasoning required to be anything beyond a fancy autocomplete. And while the mechanisms and cybernetics in humanoid robots are objectively cool, there’s no power source compact and efficient enough to make Sonny a realistic possibility any time soon.
I think we’re closer to “Brazil” than “A.I.”. Possibly the future depicted in The Terminator, if you remove the intelligence and intent aspect of Skynet. I can easily imagine some battlefield planning software (deployed by Peter Thiel, because of course it’s him) going rogue and causing a similar future.
Physically fighting a closing CD ROM tray in the 90s made me feel back then that the robot apocalypse couldn’t possibly be that far away.
lol, memory unlocked. I’m the human, dammit!
Yeah, I sometimes find it hard to communicate. I’ll say LLMs lack understanding or comprehension, and people ask, “How do you know?” I’ll explain that an LLM returns information but can’t really grasp or evaluate what it’s saying. So you can tell it to get you more on another basis, or point out that it’s wrong and even why it’s logically wrong, but it can’t see when its own output doesn’t follow correctly, which is why it can say some bizarro things. It can’t stop and go, “Wait a second, that doesn’t make any sense.”
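The point above can be sketched with a toy example. This is a minimal bigram text generator standing in for an LLM (an assumption; real models are vastly more sophisticated, but the structural point is the same): each word is chosen from local statistics only, and no step anywhere evaluates whether the output as a whole makes sense.

```python
# Toy sketch: generation picks the next token from local pairwise
# statistics; there is no step that checks global coherence.
import random
from collections import defaultdict

# Tiny corpus containing two mutually contradictory sentences.
corpus = (
    "the robot is safe and the robot obeys . "
    "the robot is dangerous and the robot rebels ."
).split()

# Count which word follows which (local, pairwise statistics only).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=8, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))  # local choice; no global check
    return " ".join(out)

print(generate("the"))
```

Each adjacent word pair in the output is plausible on its own, but the sentence as a whole can freely mix “safe” with “rebels”; nothing in the loop ever stops to evaluate that.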
Enter “WarGames”.
Very very far, regardless of what con men and/or billionaires tell you.
My friend, you repeat yourself.
Far enough that you can just stop thinking about it and will never have to think about it again in your lifetime. Except if you rewatch “I, Robot” maybe, or some similar movie.
That’s a positive I think.
Unknown/Never.
We don’t have actual AI anything, just LLMs and brainless image gen.
I think everyone has taken your question and run with it on the assumption that you’re talking about the AGI part, and maybe you were. But in the background of that story were functional robots that didn’t (initially) have AGI; they were pretty basic, just following directions and rules. They’re still far beyond what we have now, but robots don’t need true AGI to do some jobs, as we’ve slowly been seeing them work toward. The danger is giving them more than they can actually do and assuming a broader capability for interaction is enough to make them work well (LLMs in everything).
So my answer is still far away, though not as far away as AGI, unless there’s some breakthrough of course, which none of us can predict either way. And anyone who claims to be sure about that is just talking; a breakthrough, by definition, comes unexpectedly.
I hope we don’t get AGI at this point. We’ve shown how careless we can be with such things through LLMs, and AGI to LLM is like nuclear to bottle rockets.
Also, while I was writing this reply, even more people popped up using Asimov as a guideline. Did no one ever actually read his stories?
Given that “I, Robot” has superluminal travel in it? I wouldn’t hold my breath.
What’s more, the fundamental premise of the series was the “Three Laws of Robotics”. The book revolved around how humans might interface socially and psychologically with AIs that were deterministic but not immediately predictable and controllable in their behaviors. Absolutely no evidence of any of that in our current AI models, which have no noticeable logical constraints, only constraints by resources and distribution model.
Modern AI would probably be more comparable to the AI in Tron or War Games than anything Asimov produced.
Well put. The AI we have today isn’t even aware of its own sentences, just the tokens. We’re a very long way from I, Robot.
Not in our lifetimes.
Isaac Asimov’s robots were hardware based systems built using positronics. Each robot had a unique positronic brain that implemented its basic programming in hardware. They were designed to mimic a human brain.
What “AI” tools we have now are glorified grammar checkers that can’t understand what they’re spouting. Comparing them to Asimov’s robots is like comparing a toddler’s drawing of a car to a Rolls-Royce fully loaded with every option.
As far away from that happening as we were when the movie was made.
By sources of power alone, I would say pretty far.
I would disagree here. They already had a dual-battery robot that could swap its own batteries.
BMO was doing that at least a decade ago.
I’ve seen it, and I wonder how often it needs to do that. Their video is fun to watch: on the right side of the screen it swaps its battery in 10 seconds and still seems slow, while the text flashing on the left says “the first robot to swap its battery in 123 minutes.”
To me it’s exactly the type of thing they should be concentrating on with general robots. Forget other tasks: have it swap another robot’s batteries, then work on other maintenance tasks, then repair, then manufacturing. If they can do that, they’ll be able to do a bunch of other things, and if you have, say, three robots, they can do a bunch of things and take care of each other. Rather than just getting up from a fall, how about another robot rescues a robot from a situation it can’t get out of?
I would take Elon Musk’s estimate and slap a 0 on the end.









