Tesla Optimus robot takes a suspicious tumble in new demo
Posted by LopRabbit 1 day ago
Comments
Comment by ojo-rojo 1 day ago
Comment by rsynnott 18 hours ago
Definitely. This thing weighs 60kg. You don't want it to fall on you.
(This is actually one of a number of things that makes me suspect this isn't a real product, or even intended to be one. It's too heavy and hard; it would simply not be safe for humans to be around. There are a couple of companies that seem to be gearing up for some sort of limited public release of humanoid robots (generally on an "actually remote-operated when doing stuff" basis); those are generally a good bit lighter and have soft coverings (though they still carry disclaimers not to let them near kids).)
Comment by kh_hk 1 day ago
Comment by LorenDB 1 day ago
Comment by rasz 1 day ago
umm, so ignoring that it was operated by a human, it acted surprisingly human-like? :)
Comment by mike_hearn 1 day ago
A robot that properly supports being teleoperated wouldn't immediately fall over the moment someone deactivates a headset. Falling over is almost the worst thing a robot can do, you would trash a lot of prototypes and expensive lab equipment that way if they fell over every time an operator needed the toilet or to speak to someone. If you had such a bug that would be the very first thing you would fix. And it's not like making robots stay still whilst standing is a hard problem these days - there's no reason removing a headset should cause the robot to immediately deactivate.
You'd also have to hypothesize about why the supposed Tesla teleoperator takes the headset off with people in front of him/her during a public demonstration, despite knowing that this would cause the robot to die on camera and for them to immediately get fired.
I think it's just as plausible that the underlying VLA model is trained using teleoperation data generated by headset wearers, and just like LLMs it has some notion of a "stop token" intended for cases where it completed its mission. We've all seen LLMs try a few times to solve a problem, give up and declare victory even though it obviously didn't succeed. Presumably they learned that behavior from humans somewhere along the line. If VLA models have a similar issue then we would expect to see cases where it gets frustrated or mistakes failure for success, copies the "I am done with my mission" motion it saw from its trainers and then issues a stop token, meaning it stops sending signals to the motors and as a consequence immediately falls over.
This would be expected for Tesla given that they've always been all-in on purely neural end-to-end operation. It would be most un-Tesla-like for there to be lots of hand crafted logic in these things. And as VLA models are pretty new, and partly based on LLM backbones, we would expect robotic VLA models to have the same flaws as LLMs do.
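The stop-token hypothesis above can be sketched as a toy control loop. This is a hypothetical illustration, not Tesla's actual stack: `STOP_TOKEN`, `policy_step`, and the 28-joint command are all assumptions for the sake of the example. The point is structural: if the high-level policy's "I'm done" token directly ends the stream of motor commands, with no low-level balance controller underneath, the robot goes limp.

```python
# Hypothetical sketch (assumed names, not Tesla's real stack): a control
# loop for a VLA-style policy that can emit a special "stop" token, by
# analogy with LLM stop tokens.

STOP_TOKEN = "<|done|>"  # assumed sentinel value

def policy_step(observation):
    """Stand-in for the learned policy; returns a motor command or STOP_TOKEN."""
    if observation.get("mission_complete"):
        return STOP_TOKEN
    return {"joint_torques": [0.0] * 28}  # placeholder motor command

def control_loop(observations, send_to_motors):
    for obs in observations:
        action = policy_step(obs)
        if action == STOP_TOKEN:
            # The failure mode described above: nothing takes over once
            # the policy declares victory, so motor output simply ceases
            # and the robot falls.
            break
        send_to_motors(action)
```

A safer design would keep a low-level balance controller running regardless of what the high-level policy emits, which is why the parent comment argues a real teleoperation-ready product wouldn't have this bug.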
Comment by LorenDB 1 day ago
Comment by LarsDu88 1 day ago
1. Build robots to change the narrative around overpriced stock for EV company
2. Align with right wing politicians to eliminate illegal immigration.
3. If AI for robotics is solved, congrats, you eliminated the competition.
4. If AI doesn't pan out, congrats, all the firms relying on illegal immigrants can now buy your robots and have those same illegal immigrants teleoperate the robots from their home countries.
It's a win-win for the amoral broligarchy.
Comment by alsetmusic 1 day ago
The world's biggest liar, possibly. It's insane to me that laws and regulations haven't stopped him from lying to investors and the public, but that's the world in which we live.
Comment by parineum 1 day ago
Comment by lesuorac 1 day ago
The SEC didn't even enforce the punishment restricting his Twitter account after he tweeted that he was taking TSLA private at $420.
Comment by parineum 1 day ago
Just look at Nikola.
Comment by digitalPhonix 1 day ago
Comment by Gigachad 23 hours ago
Comment by parineum 9 hours ago
An example of a lie would be the topic at hand, misrepresenting current capabilities of an existing product.
Mars or self driving cars by Year X isn't a lie.
Comment by lesuorac 3 hours ago
Or about the Thai diver being a pedophile.
People just give Elon too many benefits of the doubt. Saying Mars or self-driving cars are coming next year, every year for over a decade, is just a lie after the first couple of times.
Comment by Gigachad 8 hours ago
Though some of his future predictions are obviously things he knows will not possibly happen and are as close to a lie as you can get while still being plausibly deniable.
Comment by iAMkenough 20 hours ago
Anyone who can’t see that hasn’t been paying attention or is in denial, taking the lies at face value.
Comment by DoesntMatter22 1 day ago
Comment by treetalker 1 day ago
Comment by energy123 1 day ago
Comment by parineum 1 day ago
Comment by hintklb 1 day ago
Comment by NathanKP 1 day ago
This is a real issue. If a robot is fully AI powered and doing what it does fully autonomously, then it has a very different risk profile compared to a teleoperated robot.
For example, you can be fairly certain that given the current state of AI tech, an AI powered robot has no innate desire to creep on your kids, while a teleoperated robot could very well be operated remotely by a pedophile who is watching your kids through the robot cameras, or attempting to interact with them in some way using the robot itself.
If you are allowing this robot to exist in your home, around your valuables, and around the people you care for, then whether it operates fully autonomously or a human operator is connecting through it is an extremely significant difference with very large safety consequences.
Comment by hintklb 1 day ago
But nonetheless, I was pointing out that using "think of the children" as an argument is an appeal to emotion rather than rational thinking.
Comment by energy123 1 day ago
Comment by __patchbit__ 1 day ago
Tesla AI viral inside joke?
Comment by NedF 1 day ago
Comment by sidcool 1 day ago
Comment by thejazzman 1 day ago
Like many many times..
I'd genuinely like to spot what I missed.
Comment by moogly 20 hours ago