
Thoughts on self-driving cars? It’s pretty widely agreed that they have the ability to dramatically reduce car crash fatalities, which are one of the leading causes of death in America. Would you own one? Should they be legal once validated as safe?

I just want to say that a big reason to ban human drivers is so all of the self-driving cars can be linked together on the road and communicate with each other. I personally don’t think regular driving should be banned, but I do want to see a lot of self-driving cars on the road.
1. I wouldn’t own one; I don’t like the idea of not having control over my vehicle on the road.
2. There are proven statistics that self-driving cars are safer, but there are also incidents of self-driving cars running over people where arguably an able person paying attention wouldn’t have. I’d wait for a little bit better tech first.
3. Maybe with the current tech there could be self-driving car lines, like with trolleys. That could be cool, but I can see other issues coming up.
Tough. I lean more towards #1’s position. The problem with automation and self-driving is the fact that you’re giving up control to something that is still inferior in thinking power to a human. Machinery falls apart and breaks. I would rather be in control than be slammed into a wall at 80 mph because the self-driving car’s front camera is cracked and the LIDAR loses power.
Yeah, 100%. A lot of these companies don’t really have an explanation for how they’d support all the drivers they’d lay off. Still, the lives that could be saved are hard to ignore. Would you support partial introduction with some protections for drivers? Or just no self-driving cars at all?
Computers and software are just far too fragile to trust. Far too easy to manipulate, track, and take advantage of. I’d avoid modern cars with the computers they already have in them if I could. We’ve already seen how much of a standstill one company’s server outages can cause on the Internet. Imagine if it’s the self-driving cars’ system servers. The whole world just stops bc some bozo at Google sat on a button and turned off half the earth’s servers? Nah.
That’s scary for sure, but what’s the alternative? 102 people dying every day like we have now? Also, it’s not like these cars would need a single server; they’d be able to operate independently, and with several redundant computers like aircraft have, they’d be pretty resilient to computer glitches and hacking.
Ofc not. But I think it’s a dangerous line of thinking to entrust our safety to computers. Offloading responsibility to technology will make a society less capable, less educated, and more easily taken advantage of. The promise of safety is appealing, but it removes the humanity. Increase safety protocols and education, make it more difficult to obtain (and keep) a license, impose harsher punishments for driving without one, and regulate insurance companies so it’s not a court-system fight to get paid.
How so? I mean most of the aircraft we fly on are unflyable without their flight computers. That didn’t make pilots any less competent it just made everyone much safer. We just aren’t great at driving. What’s so valuable about driving your own car? Is that value worth the lives lost?
And the reason I’m focusing on safety so much is because we spend a fuck ton of money on traffic safety and progress has been terribly slow; our roads just keep getting more dangerous. I’d absolutely support more robust human-driver safety efforts, but that’d mean telling Americans they gotta be ok with smaller cars, and they don’t want to buy those.
Honestly I don’t fly, so can’t help you with that one, and we’ve seen where airlines are going: travel that’s impossible to afford, under worse and worse conditions. If we aren’t great at driving, we need to teach people to be better drivers, not get rid of needing the skill at all by trusting algorithms to drive for us. Plus there comes the age-old trolley problem: whose safety do we prioritize? The car owner? Pedestrians? Other drivers?
No particular shade towards you but everyone thinks that. Some people are more correct than others but nobody is perfect. If you spend enough time driving you will get in an accident. Statistically it’s pretty clear that humans as a collective are not good drivers, it’s a really hard thing to do perfectly 100% of the time.
And all that being said, I didn’t bring up the idea of laws forbidding humans from driving because I just don’t think that’s realistic for America; even if it were far safer, nobody would vote for that. I think if this becomes a thing it needs to show itself as safe and convince people to switch that way. Forcing people is obviously wrong and will never work.
Sure, but those issues are economic; technology-wise we’re doing extremely well at keeping air travel safe. The thing is, you can teach people all you want; at some point they’ll fuck around with the radio for a second too long without looking up and they’ll get in an accident. Bc you do that 100 times and get away with it, so you think it’s safe till it’s not. That’s why I say humans are bad at driving: it’s just impossible to be vigilant all the time.
What makes you say the thinking power is inferior? I think as far as raw computational power goes computers are much better. And again there are cases where a self driving car might crash due to technological failures like a busted camera or LiDAR sensor, but is that enough to negate the huge number of lives saved by avoiding human error, which itself is extremely common?
You’re assuming that aggregate life-saving automatically justifies removing individual agency. That’s a philosophical claim, not a fact. It’s not about choosing to crash, it’s about whether individuals retain responsibility for their actions rather than being compelled to delegate life and death decisions to a system they don’t control.
Step 1: “Why don’t we just make self driving cars” Step 2: “Too complicated, why don’t we reduce decision making by the computer by putting it on a predetermined path” Step 3: “Complications with invisible pathing, why don’t we just put it on a physical track” Step 4: “T̸̖͔̓̀̅r̴̠͖̫͋̍̈́̈́̽ă̵̧̞̭̳̈́i̸̘̮̐n̴͈̈́͊͑͛͝”
Where does control start and end for you then? Most planes you fly in are on autopilot a vast majority of the time. Almost all modern cars brake for you to a certain extent. How much is too much decision making delegated to a system you don’t control? Especially when the control is traded for proven safety improvements?
Autopilot isn’t “self-driving”, it’s automation. Cruise control is a form of automation. And you want to know what both of those have in common? Choice. Modern braking systems constrain execution, they don’t make decisions. They’re tools under supervision, not a replacement for agency. Mandatory autonomy, which is what you’re arguing for, decides when risk is acceptable and does so without your ability to have a say in the matter. The proven safety still doesn’t prove that it outweighs agency.
My point is that it’s on a spectrum. Referring to autopilot as self driving or automation is splitting hairs in my opinion, those systems will make decisions for the pilots in many situations. Although yes I see your point about lesser systems like assisted braking, you do have more choice there.
Can I ask you to elaborate on the value of this agency you’re referring to? On any given day there are about 100 traffic fatalities in the US. Say self-driving cars become somewhat widely used (30% or so) and fatalities drop 30%, i.e. 30 more people are still alive. To me that value is really obvious: those people are alive who otherwise wouldn’t be. What, in practical terms, is the value of this agency for the other 70% of drivers that day?
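The back-of-envelope math in that scenario looks like this; both inputs are the assumptions stated in the comment (roughly 100 US traffic deaths per day, a hypothetical 30% drop), not measured data:

```python
# Rough illustration of the scenario above; all inputs are assumptions
# from the comment, not real measurements.
daily_fatalities = 100      # approximate US traffic deaths per day
assumed_reduction = 0.30    # hypothetical overall drop from partial AV adoption

lives_saved_per_day = daily_fatalities * assumed_reduction
print(round(lives_saved_per_day))  # 30
```

The real-world relationship between adoption rate and fatality reduction would of course not be this linear; this just makes the stakes of the tradeoff concrete.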
Cause I’m sort of confused whether you’re speaking in a macro sense about the societal impacts of not being able to assign blame when accidents occur, or whether you mean that the ability to personally choose whether you’re going to hit a school bus or a minivan in a split-second decision is so valuable that it outweighs lives lost. As I’ve laid it out (and tell me if I got it wrong) I just don’t understand that tradeoff.
For sure. I’m from Saint Louis originally, and my grandma used to be able to take a trolley from the southwest side all the way to the northeast side in Illinois. Now we have one metro line that hardly works. We’re also more than capable of building high-speed rail to link regional hubs, but as beer says, cheap-ish air travel (subsidized, ofc) makes that a bit difficult. My point wasn’t that we can’t or shouldn’t pursue rail, just that being a vast rural country we’re always going to need some cars.
I’d argue there are also plenty of instances where humans run people over where arguably an able person paying attention wouldn’t. Computers also don’t get distracted or drink or do drugs before they start driving. It’s definitely worth acknowledging there have been some unfortunate, bad errors, but we have to acknowledge humans are nowhere near perfect drivers.
I don’t either! It’s been tossed around here a lot, but I never said anything about banning human drivers; I wouldn’t advocate for that and I don’t think it’s realistic. However, this car-to-car communication is possible and could dramatically reduce fatalities even with a majority of human-driven cars on the road.
Yes, there are problems we can name with these systems, but we can see already that they’re much safer overall. In all the testing we’ve done (100 million miles from Waymo alone), these cars haven’t come close to crashing as often or as severely as human drivers do. How much safer do they have to be before the tradeoff is worth it?
But yes, you can make reasonable comparisons even with a small number of vehicles. 100 million miles is a lot of data, and that’s not including all the shared data these companies exchange. You can compare this to many different datasets of human-driven vehicles (taxis, fleet vehicles, government vehicles, newer connected vehicles, etc.) to get a useful comparison between the two. There are also scaling methods people use to compare smaller datasets to larger ones with pretty decent accuracy.
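The basic normalization behind those comparisons is simple: convert raw incident counts to a per-mile rate so fleets of very different sizes can be compared. A minimal sketch, where every number is a made-up placeholder rather than anyone’s real crash data:

```python
# Hypothetical sketch: normalize incident counts to a per-million-miles rate
# so fleets of very different sizes can be compared directly.
# All counts and mileages below are placeholders, not real data.

def rate_per_million_miles(incidents: int, miles: float) -> float:
    """Incidents per one million vehicle miles traveled."""
    return incidents / (miles / 1_000_000)

# Assumed example datasets (illustrative only)
av_rate = rate_per_million_miles(150, 100_000_000)    # an AV test fleet
taxi_rate = rate_per_million_miles(900, 200_000_000)  # a human-driven taxi baseline

print(f"AV: {av_rate} per M mi, taxi: {taxi_rate} per M mi")
```

Real comparisons also have to control for confounders (road type, weather, urban vs. highway miles), which is where the scaling methods mentioned above come in.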
It’s stupid to say the solution to human error is to just eradicate human drivers, instead of increasing education and skills to minimize human error. If automated driving becomes the norm and computer errors become the majority of crashes, will you then advocate for the erasure of self-driving cars? Giving control back to a population that now has no driving skills, since they never needed to learn because the machine did it for them?
ChatGPT makes fewer grammatical mistakes than people. Should we replace grammar school with language models? Calculators make fewer mistakes than people; should we get rid of all accounting departments in favor of an automated Excel program? Eradicate the judicial branch in favor of an impartial automated law system?
Well again, I never said anything about eradicating humans. Personally I’d prefer an approach where the technology is included in regular cars sometime soonish and you can choose to use it or not. Then, as people see that it’s safer, more and more people would use it. I think you should always need to pass a driver’s test to operate any car, self-driving or not.
And we’ve done a lot of education. We spend billions every year trying to figure out how to educate and train people the right way, how to ticket and enforce the right way. Ultimately it’s just not something that humans are made to be good at. Never getting in a crash means never losing vigilance, even for a second. That’s just not possible for any human being without a decent amount of luck.
And your scenario here, where glitches account for the majority of crashes, just seems like offering up a far-fetched theoretical downside to oppose the very real existing problem of 100 people dying every day in car crashes. But yes, if these cars somehow ended up being much more dangerous, then of course I’d advocate for them to be pulled from the roads.
Well I mean let’s think about the differences in the value of a human as a driver versus a human as a working professional or educator. Humans as drivers have one relatively simple goal which is to move a vehicle in physical space and follow rules. Use physics and quick decision making. Computers are objectively better at that. Just like they’re better at calculating things which is why we don’t do math by hand.
However, educating, doing accounting or judicial work, etc. is extremely nuanced. Also, human working professionals, unlike human drivers, are actually really good at what they do. There’s no benefit to replacing those people with computers; the things they do aren’t done well by computers. They’re entirely different applications.
Is…is driving not nuanced? Have all the things we’ve discussed here not been part of the nuance of driving? I think your issue is that driving has become so normal and regular to you that you’re no longer consciously aware of all the calculations you’re doing to drive safely, follow traffic law, and keep people safe.
Compared to teaching a child or hearing a case in court, yes, it is comparatively simple. There are clear sets of rules, and if you follow them you will be safe most of the time. In instances where something unexpected happens, the people with the best odds of navigating the situation safely are those who are most vigilant and can react the quickest. That’s an ideal use case for a computer.
That’s all true but I don’t think any of it is really as big a concern as you’d think. One of the biggest problems is actually “bullying” where people stand in front of the car and harass the people inside, and the car is stuck because it can’t hit the person. The biggest safety concerns with AVs, as compared to human drivers, really lie in the gray areas of what the “ethical” way to drive is (should you hit someone to keep your passengers safe, what should you hit if you have no choice, etc.)