cross-posted from: https://fedia.io/m/[email protected]/t/2201156
In case you were worried about the roads being too safe, you can rest easy knowing that Teslas will be rolling out with unsupervised “Full Self Driving” in a couple of days.
It doesn’t seem to be going great, even in supervised mode. This one couldn’t safely drive down a simple, perfectly straight road in broad daylight :( Veered off the road for no good reason. Glad nobody got badly hurt.
We analyze the onboard camera footage, and try to figure out what went wrong. Turns out, a lot. We also talk through how camera-only autonomous cars work, Tesla’s upcoming autonomous taxi rollout, and how AI hallucinations figure into everything.
There’s a lot wrong with Tesla’s implementation here, so I’m going to zoom in on one issue in particular. It is outright negligent to skip LIDAR on a car that you want to be autonomous. Maybe if this car had sensors to map out 3D space, that would help it move more successfully through 3D space?
You and I are (mostly) able to safely navigate a vehicle with 3D stereoscopic vision. It’s not a sensor issue, it’s a computation issue.
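To put the computation claim in concrete terms: depth from a stereo pair is just triangulation, depth = focal length × baseline / disparity. Here’s a minimal sketch; the focal length, baseline, and disparity numbers are made up for illustration and are not real Tesla camera specs.

```python
# Minimal stereo-depth sketch: depth = focal_length * baseline / disparity.
# All numbers below are illustrative, not real camera parameters.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulate depth (metres) from the pixel disparity between two cameras."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive; zero would mean the point is at infinity.")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 30 cm baseline, 5 px disparity -> 60 m away.
print(depth_from_disparity(focal_px=1000.0, baseline_m=0.3, disparity_px=5.0))
```

The catch is that disparity shrinks as distance grows, so a one-pixel matching error at long range turns into a large depth error, and reliably finding those matches in two noisy images is exactly the computation problem being argued about here.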
If I eventually end up in a fully self-driving vehicle, I want it to be better than what you and I can do with our eyes.
Is it possible to drive with just stereoscopic vision? Yeah. But why is Tesla against BEING BETTER than humans?
That I agree with 100%
Computation NOW cannot replicate what humans do with our rather limited senses.
“Self-driving” cars are being made NOW.
That means it’s the NOW computation we worry about, not some hypothetical future computing capability. And the NOW computation cannot do the job safely with just vision.
Did I give you the impression that I was talking about some other time than NOW?
You were babbling about non-existent computing horsepower, yes.
Buh-bye. You’re not worth engaging any further. Go kneel before Kaptain Ketamine and service to your heart’s content.
In theory maybe, but our brains are basically a supercomputer on steroids when it comes to interpreting and improving the “video feed” our eyes give us.
Could it be done with just cameras? Probably, at some point in the future. But why the fuck wouldn’t you use a depth sensor now, and keep it as redundancy even then?
Yeah I mean that’s what I said.
I can also identify a mirror. A Tesla smashed into one head-on. If you can’t effectively understand the image, then it’s not enough information.
My point exactly.
No, it’s just not able to process the information it has.
I get what you are saying. But adding a new form of data input into the system would probably improve performance, not degrade it. I don’t think it makes sense to leave LIDAR out of Teslas.
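For a rough sense of why an extra, independent depth measurement tends to help rather than hurt, here’s a toy inverse-variance fusion of a noisy camera depth estimate with a tighter LIDAR range. The numbers and noise levels are invented for illustration; this is not how any production driving stack actually works.

```python
# Toy sensor fusion: combine two independent depth estimates by inverse-variance
# weighting. The measurements and variances below are invented for illustration.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float) -> tuple[float, float]:
    """Return the fused estimate and its variance for two independent measurements."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Camera says 58 m with ~5 m std dev; LIDAR says 61 m with ~0.5 m std dev.
camera_depth, camera_var = 58.0, 5.0 ** 2
lidar_depth, lidar_var = 61.0, 0.5 ** 2

print(fuse(camera_depth, camera_var, lidar_depth, lidar_var))
# Fused estimate lands close to the LIDAR reading, with lower variance than either sensor alone.
```

Under independence, the fused variance is never worse than the better sensor’s on its own, which is the statistical version of “redundancy doesn’t hurt.”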
All of this feels like Elon was asked to justify not putting a rather expensive (at the time) set of sensors into the Teslas, and he just doubled down and said they would compensate with software.