Tesla Optimus robot takes a suspicious tumble in new demo

Posted by LopRabbit 1 day ago

66 points | 30 comments

Comments

Comment by ojo-rojo 1 day ago

I noticed that when its left hand came down there was a squirt of water, probably from crushing a water bottle. That makes me wonder how much force these robots can exert, and if they can accidentally hurt people.

Comment by rsynnott 18 hours ago

> That makes me wonder how much force these robots can exert, and if they can accidentally hurt people.

Definitely. This thing weighs 60kg. You don't want it to fall on you.

(This is actually one of a number of things that makes me suspect this isn't a real product, or even intended to be one. It's too heavy and hard; it would simply not be safe for humans to be around. There are a couple of companies that seem to be gearing up for some sort of limited public release of humanoid robots (generally on an "actually remote-operated when doing stuff" basis); those are generally a good bit lighter and have soft coverings (though they still come with disclaimers not to let them near kids).)

Comment by kh_hk 1 day ago

Oh, they definitely can. A friend of mine working with humanoid robots told me that kids running around their demo booths and wanting to hug the robots were a major stress factor during demos. That, plus knowing it's your code that's running there.

Comment by LorenDB 1 day ago

All observations about teleoperation aside, it's just really funny to me how the robot appears to knock over the water bottles, throw its hands up in exasperation, and then give up and fall down. It somehow makes it feel more human.

Comment by rasz 1 day ago

>teleoperation aside ... feel more human

umm, so ignoring that it was operated by a human, it acted surprisingly human-like? :)

Comment by mike_hearn 1 day ago

It might have been human operated, but it also might have just been copying its training data.

A robot that properly supports being teleoperated wouldn't immediately fall over the moment someone deactivates a headset. Falling over is almost the worst thing a robot can do; you would trash a lot of prototypes and expensive lab equipment if they fell over every time an operator needed the toilet or had to speak to someone. If you had such a bug, it would be the very first thing you would fix. And it's not like making robots stay still whilst standing is a hard problem these days - there's no reason removing a headset should cause the robot to immediately deactivate.

You'd also have to hypothesize about why the supposed Tesla teleoperator would take the headset off with people in front of them during a public demonstration, despite knowing that this would cause the robot to die on camera and get them immediately fired.

I think it's just as plausible that the underlying VLA model is trained on teleoperation data generated by headset wearers, and that, just like LLMs, it has some notion of a "stop token" intended for cases where it has completed its mission. We've all seen LLMs try a few times to solve a problem, give up, and declare victory even though they obviously didn't succeed. Presumably they learned that behavior from humans somewhere along the line. If VLA models have a similar issue, then we would expect to see cases where the robot gets frustrated or mistakes failure for success, copies the "I am done with my mission" motion it saw from its trainers, and then issues a stop token, meaning it stops sending signals to the motors and as a consequence immediately falls over.
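
To make that hypothesized failure mode concrete, here is a toy sketch (purely illustrative; the names vla_policy, FakeRobot and STOP are invented, and nothing here reflects Tesla's actual software): a policy that has learned a stop token from teleoperation demos simply stops commanding the motors once it emits that token, and a biped that depends on active balance then topples.

    # Toy sketch only: invented names, not Tesla's stack.
    import random

    STOP = "STOP"  # analogue of an LLM end-of-sequence token, learned from demos

    def vla_policy(observation):
        """Stand-in for a learned vision-language-action model."""
        # Assume the model sometimes mistakes failure for success and emits STOP,
        # mirroring the "declare victory and give up" behavior seen in LLMs.
        if observation["bottles_knocked_over"] and random.random() < 0.5:
            return STOP
        return {"left_arm": 0.1, "right_arm": -0.1}  # dummy joint targets

    class FakeRobot:
        def observe(self):
            return {"bottles_knocked_over": True}

        def send_joint_targets(self, targets):
            print("actuating:", targets)

    def control_loop(robot, ticks=10):
        for _ in range(ticks):
            action = vla_policy(robot.observe())
            if action == STOP:
                # No handcrafted fallback (e.g. a "hold current pose" balance
                # controller), so motor commands just cease, and an actively
                # balanced biped that stops actuating falls over.
                print("stop token emitted; motors idle; robot topples")
                return
            robot.send_joint_targets(action)

    control_loop(FakeRobot())

The point of the sketch is the missing fallback: without some hold-still controller underneath the learned policy, the end of the action stream is the end of balance.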

This would be expected for Tesla given that they've always been all-in on purely neural end-to-end operation. It would be most un-Tesla-like for there to be lots of hand crafted logic in these things. And as VLA models are pretty new, and partly based on LLM backbones, we would expect robotic VLA models to have the same flaws as LLMs do.

Comment by LorenDB 1 day ago

Well, the human operator was just taking off a VR headset (and presumably forgot to deactivate the robot first). It just so happened to also look like the robot was fed up with life.

Comment by LarsDu88 1 day ago

I feel many folks are missing the forest for the trees.

1. Build robots to change the narrative around the EV company's overpriced stock.

2. Align with right wing politicians to eliminate illegal immigration.

3. If AI for robotics is solved, congrats, you eliminated the competition.

4. If AI doesn't pan out, congrats, all the firms relying on illegal immigrants can now buy your robots and have those same illegal immigrants teleoperate the robots from their home countries.

It's a win-win for the amoral broligarchy.

Comment by alsetmusic 1 day ago

> Even recently, Musk fought back against the notion that Tesla relies on teleoperation for its Optimus demonstration. He specified that a new demo of Optimus doing kung-fu was “AI, not tele-operated”

The world's biggest liar, possibly. It's insane to me that laws and regulations haven't stopped him from lying to investors and the public, but that's the world in which we live.

Comment by parineum 1 day ago

The FCC would have a whole lot to say if he was lying about something like that at a publicly traded company.

Comment by lesuorac 1 day ago

What would the FCC do?

The SEC didn't even enforce the whole "he can't run his Twitter account" punishment for tweeting that he was taking TSLA private at 420.

Comment by parineum 1 day ago

Sorry, I meant SEC. Just search for "Musk SEC". He's been fined and sued already for similar statements. It's pretty illegal to lie about the capabilities of the products of a publicly held company.

Just look at Nikola.

Comment by digitalPhonix 1 day ago

That’s what lesuorac is saying. The SEC found he violated the rules for a publicly traded company... and then could do absolutely nothing to enforce them.

Comment by Gigachad 23 hours ago

He lies again and again. Occasionally gets a slap or a small fine. And then keeps doing it.

Comment by parineum 9 hours ago

What has he lied about? With the caveat that a prediction of the future being incorrect and an estimation of a timeline being wrong is not a lie.

An example of a lie would be the topic at hand, misrepresenting current capabilities of an existing product.

Mars or self driving cars by Year X isn't a lie.

Comment by lesuorac 3 hours ago

A lot. The easiest example is the Autopilot video that started off with "The car is driving itself, the driver is there for regulatory reasons." The video was created by stitching together different sessions, because in some of the sessions the car drove itself off the road into solid objects.

Or the claim that the Thai diver was a pedophile.

People just give Elon too much benefit of the doubt. Saying Mars/self-driving cars are coming next year, for over a decade, is just a lie after the first couple of times.

Comment by Gigachad 8 hours ago

One instance https://www.forbes.com/sites/willskipworth/2023/12/07/elon-m...

Though some of his future predictions are obviously things he knows will not possibly happen and are as close to a lie as you can get while still being plausibly deniable.

Comment by iAMkenough 20 hours ago

That’s what DOGE was/is for. The world’s richest man’s personal effort to gut the regulatory agencies pursuing him.

Anyone who can’t see that hasn’t been paying attention or is in denial, taking the lies at face value.

Comment by DoesntMatter22 1 day ago

Nothing Electrek says can really be taken seriously. They’ve openly said they have an axe to grind.

Comment by treetalker 1 day ago

Sub-Optimus was just worn out from its late night at E11EVEN.

Comment by energy123 1 day ago

These will be hazardous to children.

Comment by parineum 1 day ago

Not if they aren't deployed near children.

Comment by hintklb 1 day ago

Ah yes, Think of the children! [1]

[1] https://en.wikipedia.org/wiki/Think_of_the_children

Comment by NathanKP 1 day ago

No.

This is a real issue. If a robot is fully AI powered and doing what it does fully autonomously, then it has a very different risk profile compared to a teleoperated robot.

For example, you can be fairly certain that given the current state of AI tech, an AI powered robot has no innate desire to creep on your kids, while a teleoperated robot could very well be operated remotely by a pedophile who is watching your kids through the robot cameras, or attempting to interact with them in some way using the robot itself.

If you are allowing this robot to exist in your home, around your valuables, and around the people you care for, then whether it operates fully autonomously or a human operator is connecting through it is an extremely significant difference, one with very large safety consequences.

Comment by hintklb 1 day ago

I actually agree with you on this. I think those robots are going to be a huge danger for society and everyone (not only children).

But nonetheless, I was pointing out that using "Think of the children" as an argument is an appeal to emotion rather than rational thinking.

Comment by energy123 1 day ago

Tip-over risk, as with furniture, is a risk specific to children and is not abstract or far-fetched.

Comment by __patchbit__ 1 day ago

Looks consistent with how you'd imagine Star Wars' C-3PO behaving after doing a booboo.

Tesla AI viral inside joke?

Comment by sidcool 1 day ago

Fred Lambert's blogs about Tesla are always critical and have been proven wrong many times. I would take this with a pinch of salt.

Comment by thejazzman 1 day ago

Can you elaborate? I read Electrek pretty closely, and if anything, most of the time Tesla/Elon deny things only for them to be proven true shortly after.

Like, many many times.

I'd genuinely like to spot what I missed.

Comment by moogly 20 hours ago

What should be taken with a pinch of salt? The video?