
When it gets there, regulatory approval will absolutely be a bottleneck to deployment. They don't say it is a current blocker.

And it won't come close to Theranos. Tesla makes real products that are class-leading. Even if Tesla can't reach level 5, it will be damn close and make driving 10-100x safer than just a human.



This is an absolute abuse of language. Can I say that my backyard nuclear fusion reactor is held back by regulatory approval? Surely when I finally get around to building a working one, I will have to jump through those pesky hoops.


Sure, but to even mention it now is disingenuous because they’re not even close to having a solution that their own engineering department would be willing to ship.

You and I have no idea whether it’s possible to get close to level 5 with their currently shipping hardware. Neither do they. And this stuff about being 10-100x safer than a human is pure fantasy right now. The industry is incredibly far away from that and there’s no evidence to suggest Tesla is years ahead of other teams working on the problem.


10x is within striking distance. Search for Tesla's Autopilot safety reports for the evidence. While those numbers are biased towards highway miles, all the safety features augment the human driver, and this will only get better with time.

Shipping is different from functional. You don't know what their engineering department thinks. Unless you are an insider, it is hard to guess the timeline, trajectory, or confidence levels.


>Unless you are an insider, it is hard to guess the timeline, trajectory, or confidence levels.

You realize that as a public company who is selling this product, they are obligated to spell this out to consumers and shareholders, right? The entire point is for it to be unambiguous.


Yet you are confident of that "10x" statistic. Therefore: are you a) leaking inside data, or b) pulling numbers out of thin air?


>When it gets there, regulatory approval will absolutely be a bottleneck to deployment. They don't say it is a current blocker.

Elon regularly states that the chief blocker for Tesla is regulatory approval. Meanwhile Teslas still drive straight into overturned trucks.


> Meanwhile Teslas still drive straight into overturned trucks.

To be fair, so do people.


And people driving into things is considered to be a problem, not an insignificant quirk that's almost unworthy of mention.


In 2017, the last year for which we have data, 10,230 fatal crashes involved hitting a stationary object, representing 30% of all fatal crashes.

Tesla has had a total of 5 fatal crashes in its history, and only one of them was hitting a stationary object (20%).

So objectively Tesla is doing better than humans.


You also need to figure out how many of those crashes are intentional suicides or a result of something else (such as someone having a heart attack and losing control of their car).

Basically, you would need to figure out how many of those accidents involved hitting stationary objects on the highway through the driver's fault. That's what the Tesla cars have done. I don't think those stats are available.


How many miles driven (which is a probably more correct measure of exposure) are represented in the two cases?
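To make the exposure point concrete: raw crash counts can't be compared without normalizing by miles driven. A minimal sketch in Python, where the 10,230 count comes from the comment above but every mileage figure is a made-up placeholder, purely to illustrate how the comparison can flip:

```python
def fatal_rate_per_billion_miles(fatal_crashes, miles_driven):
    """Fatal crashes per billion vehicle-miles travelled."""
    return fatal_crashes / miles_driven * 1e9

# Hypothetical exposure figures, NOT real statistics:
human_rate = fatal_rate_per_billion_miles(10_230, 3.2e12)  # ~3.2 per billion miles
tesla_rate = fatal_rate_per_billion_miles(1, 2.0e8)        # 5.0 per billion miles

# With these placeholder numbers, the fleet with far fewer total crashes
# actually has the higher per-mile rate once exposure is accounted for.
```

The point isn't which way the real numbers come out; it's that without the denominator, the raw counts tell you nothing.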


> And it won't come close to Theranos. Tesla makes real products that are class-leading.

Class-leading in what sense(s)?

> Even if Tesla can't reach level 5, it will be damn close

But that's the problem with self-driving cars. Damn close isn't good enough. A miss is as good as a mile.

The problem with the self-driving/automation scale is that anything around levels 2-4 probably shouldn't be allowed on public roads, at least not yet.

Basic driver aids, where the driver is always fully engaged but the system can help to avoid mistakes, are proven to improve safety. This is what you get at level 1, and such technologies are already widespread in the industry.

If we can ever make a fully autonomous vehicle that can genuinely cope with any driving conditions, so you don't need any driver or controls in the vehicle any more, then obviously this has the potential to beat human drivers. This is level 5. But we don't know how to do this yet, and I have seen absolutely no evidence so far that anyone will know how to do it any time soon either.

In between, we have several variations where a human driver is required for some of the monitoring and control of the vehicle but not all. This has some horrible safety implications, particularly around the transitions between human- and vehicle-controlled modes of operation, and around creating a false sense of security for the human driver. The legal small print will probably say that they must remain fully alert and able to take over immediately at any time, but whether it is within human capability to actually do that effectively is an entirely different question.

> and make driving 10-100x safer than just a human.

I've been driving for more than 25 years, and racked up hundreds of thousands of miles behind the wheel. I've never caused an accident, as far as I'm aware. I've never had a ticket. I try to be courteous to my fellow road users and give a comfortable ride to any passengers I have with me. What, in your opinion, would driving 10-100x safer than mine look like?

Humans certainly aren't perfect drivers and we have plenty of variation in ability. Things can go wrong, and I'm sure we'd all be happy to see fewer tragedies on our roads. But given the vast amounts of travel we undertake and how many of us do drive, autonomous vehicles will need an extremely good record -- far better than they have so far -- to justify the sort of claim you're making here.


>But that's the problem with self-driving cars. Damn close isn't good enough. A miss is as good as a mile

Maybe close is good enough. The problem as I see it, which people don't usually focus on, is that it's impossible for humans to monitor the situation while doing other stuff. You can only do that when you're far away from other things, like in a plane or on a boat.

How can we simultaneously believe it's possible to instantly engage with driving and that people can't be trusted to text or make phone calls while driving?


> How can we simultaneously believe it's possible to instantly engage with driving and that people can't be trusted to text or make phone calls while driving?

Exactly. Driving while distracted by phones is well-known to be very dangerous, which is why it's against the law in many places. Encouraging drivers who might need to take over in an emergency to zone out and focus on other activities seems unwise for the same reason. This is why the middle levels on the self-driving scale could be very dangerous.


Level 5 isn't the only safe level. Level 4 is safe too - e.g. a car that is fully capable of driving itself without human monitoring in slow stop and go traffic on a highway.

Levels 2 and 3 are the danger zone (and it worries me that car systems have jumped from level 1 to level 2, since having the human steer ensures driver attentiveness, which is harder to maintain when the car does lane centering for you).


> Level 5 isn't the only safe level. Level 4 is safe too

I agree that, by definition, this is necessarily true.

The catch I see is that the same definition is predicated on the vehicle being able to safely end the journey before entering any unsupported situation, without requiring any driver interaction. I'm not aware that we have any known strategy for solving that problem in the general case that would not achieve level 5 anyway.

I acknowledge that in specific situations like geofencing, where a vehicle does effectively operate at level 5 but only under predetermined conditions, that would be level 4 according to the scale. However, it's the ability to operate fully autonomously, albeit within those boundaries, that makes the vehicle safe in this scenario.

So, what happens if external conditions (for example, directions by a police officer, or some sort of road accident or severe weather) mean that the vehicle cannot safely remain within the area where it can operate autonomously? Unlike a vehicle with a human driver, it cannot adapt and safely leave that area either.

In short, unless perhaps we're also going to have a new set of rules and possibly some separated infrastructure for use with level 4 vehicles, I'm not sure they can ever fully match the safety of a human driver without necessarily reaching level 5.


Oh, I get it: once the cart is delivered, they can go about looking for a horse to pull it.


So then a Covid vaccine is also blocked by regulatory approval. I look forward to teaching my project manager this new definition.


Yes, exactly. Quite a few possibilities are being tested, as in they exist (maybe). It is quite literally being blocked by regulatory approval, with the testing for validity and safety being the approval process.


Nope. They're being tested for validity, meaning that their existence _as a treatment_ is under test. If it turns out they don't have an effect, will you say "we did have a cure for a moment: leeches; but it was rejected by the regulatory process, therefore bad bureaucrats for showing that it didn't work"?

Of course not, that doesn't make sense...if your goal is a cure. (If you're peddling hope, or just looking to make money off quack medicine, OTOH...)



