Google says self-driving car hits municipal bus in minor crash
I live in the area. The stinking buses around here ... They crowd the lanes regularly. We human drivers, I suppose, are used to it, but it is a serious pain in the butt.
The buses will drive right next to the lane marker, with their mirrors hanging over into your lane. This forces everybody to creep a little into the lane on the driver's side and hope that everyone in traffic has a small enough car to manage it. Otherwise, you have to hang back behind the bus as if it were in your lane and wait for it to reach a stop.
I've had my issues with Google autonomous cars (they drive slow and they used to be exceptionally slow at making right hand turns, causing traffic problems), but in this instance I'm happy to throw VTA under the bus, if you will, and lay blame 100% at their feet.
There is a common assumption that humans = bad drivers. And that these self driving cars will be much better than people at driving. I think this belief greatly underestimates the difficulty of what Google and others are trying to do.
Humans are actually great drivers when the situation requires thinking. Lots of snow and can't see the lane markers? Millions of people adapt to this every day during winter. Lots of pedestrians, bicycles, motorcycles, etc. doing somewhat unpredictable things? Again, look at any Asian mega city; people adapt very fast. Basically, if the situation requires being alert, people are very good at driving.
When are people bad at driving? Whenever it's monotonous: bumper-to-bumper traffic or a free-flowing freeway, the constant repetition of red/green cycles while going down a suburban street. These boring situations make a lot of people basically turn off their alert thinking. Then they do other things, like texting or talking on the phone.
And these boring situations are exactly where AI self-driving is better at driving than people. Computers never get bored. The self-driving car will be at 100% attention even during the most boring traffic. But when boring suddenly turns to not boring? The current state of AI is very, very bad at this.
And unfortunately there's no easy way to use the best of both sides. If the AI is fully driving, it will do great while the driving is boring. But by the time the AI decides "this is too much, I can't handle this," it's too late to alert the human driver to take over. And if the human driver is required to always pay attention, what's the point of the self-driving car?
Self driving cars will get there someday, but I think it's much farther away than many assume it will be.
And here is the actual accident report [0]
"A Google Lexus-model autonomous vehicle ("Google AV") was traveling in autonomous mode eastbound on El Camino Real in Mountain View in the far right-hand lane approaching the Castro St. intersection. As the Google AV approached the intersection, it signaled its intent to make a right turn on red onto Castro St. The Google AV then moved to the right-hand sid of the lane to pass traffic in the same lane that was stopped at the intersection and proceeding straight. However, the Google AV had to come to a stop to go around sandbags positioned around a storm drain that were blocking its path. When the light turned green, traffic in the lane continued past the Google AV. After a few cars had passed, the Google AV began to proceed back into the center of the lane to pass the sand bags. A public transit bus was approaching from behind. The Google AV test driver saw the bus approaching in the left side mirror but believed the bus would stop or slow to allow the Google AV to continue. Approximately three seconds later, as the Google AV was reentering the center of the lane, it made contact with the side of the bus. The Google AV was operating in autonomous mode and travelling less than 2 mph, and the bus was travelling at about 15 mph at the time of contact.
The Google AV sustained body damage to the left front fender, the left front wheel and one of its driver's-side sensors. There were no injuries reported at the scene."
[0] - https://www.dmv.ca.gov/portal/wcm/connect/3946fbb8-e04e-4d52...
Ok, I laughed out loud on that one. This quote in particular, "The vehicle and the test driver 'believed the bus would slow or allow the Google (autonomous vehicle) to continue.'"
Bus drivers in the Bay Area are notorious for ignoring traffic (and pedestrians). Apparently there are indemnity provisions or statutes that make suing either the transportation agency or the driver nearly impossible, so people quickly learn that bus drivers drive with impunity. Plenty of stories get posted to the local traffic column in the paper, shared among neighbors, and featured in the department of safety's "blotter."
Google needs to go back and program their cars to always assume that buses are out to get them and to avoid them at all costs. They are an active traffic hazard, often operated by a disinterested and distracted driver. The only way to "win" is to not be wherever the bus is.
It was inevitable, so I'm sure they're quite pleased it was a minor issue and not something catastrophic. Someday in the future a self driving car is going to hurt or kill a person and then the real legal tests will begin, but this is the first step on the pathway to normalcy.
My personal fear is that Google and maybe one or two others will get self-driving cars right, but then the imitations from other manufacturers will fall short. The liability needs to end up on the manufacturer of the self-driving car system; this is not something to be taken lightly at all.
Here's the Autonomous Vehicle Accident Report filed with the CA DMV.[1] "The Google AV was operating in autonomous mode and traveling at less than 2 mph and the bus was traveling around 15 mph." The other vehicle was a 2002 Newflyer Lowfloor Articulated Bus, which is 61 feet long including the "trailer" part.
Here's where it happened.[2] You can see traffic cones around the storm drain.
This is a subtle error. Arguably, part of the problem was that the AV was moving too slowly. It was trying to break into a gap in traffic, but because it was maneuvering around an unusual road hazard (sandbags), it was moving very slowly. This situation was misread by the bus driver, who failed to stop or change course, perhaps expecting the AV to accelerate. The AV is probably at fault, because it was doing a lane change while the bus was not.
Fixing this requires that the AV be either less aggressive or more aggressive. Less aggressive would mean sitting there waiting for a big break in traffic. That could take a while at that location. More aggressive would mean accelerating faster into a gap. Google's AVs will accelerate into gaps in ordinary situations such as freeway merges, but when dealing with an unusual road hazard, they may be held down to very slow speeds.
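For what it's worth, here is a minimal sketch (Python, with invented numbers; not anything from Google's actual planner) of the gap-acceptance rule this trade-off boils down to: take the gap only if the trailing vehicle can't close it before the maneuver, plus a margin, is done. The "less aggressive vs. more aggressive" knob is essentially the assumed maneuver time and margin.

    # Illustrative gap-acceptance rule; every number here is an assumption.
    def accept_gap(gap_m, trailing_speed_mps, own_speed_mps,
                   maneuver_s=3.0, margin_s=1.0):
        """Take the gap only if the trailing vehicle cannot close it
        before the maneuver (plus a safety margin) is finished."""
        closing_speed = trailing_speed_mps - own_speed_mps
        if closing_speed <= 0:      # we are pulling away; the gap only grows
            return True
        time_to_close = gap_m / closing_speed
        return time_to_close > maneuver_s + margin_s

    # At <2 mph (0.9 m/s) against a bus at 15 mph (6.7 m/s), a 15 m gap
    # closes in about 2.6 s -- less than the ~4 s needed -- so reject it.
    print(accept_gap(15, 6.7, 0.9))   # False
    # Accelerating to ~10 mph (4.5 m/s) stretches that to about 6.8 s.
    print(accept_gap(15, 6.7, 4.5))   # True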
I wonder if Google will publish the playback from their sensor data.
[1] https://www.dmv.ca.gov/portal/wcm/connect/3946fbb8-e04e-4d52... [2] https://goo.gl/maps/QzvVXQGxhX72
This story is getting a lot of press coverage. Reuters, CNBC and Wired are already covering it.
When Cruise (YC 14)'s car hit a parked car at 20 mph in SF last month, there was no press attention.[1] Even though it was across the street from the main police station.
That Cruise crash is an example of the "deadly valley" between manual driving and fully automatic driving. The vehicle made a bad move, which prompted the driver to take over, but too late. This is exactly why AVs can't rely on the driver as backup.
[1] https://www.dmv.ca.gov/portal/wcm/connect/bc21ef62-6e7c-4049...
> The vehicle and the test driver "believed the bus would slow or allow the Google (autonomous vehicle) to continue."
Clearly the algorithm does not take into account the classic attitudes of bus drivers.
Technically, even if self-driving cars are merely safer than human drivers (not perfect), that should be good enough. But my lizard brain tells me that I'm putting my life in the hands of a machine that potentially has bugs, and that's a little scary.
Most of us are going to expect nothing short of perfection from these machines to really trust them.
From a related Reuters article: http://www.reuters.com/article/us-google-selfdrivingcar-idUS...
"Google said in the filing the autonomous vehicle was traveling at less than 2 miles per hour, while the bus was moving at about 15 miles per hour."
Google said in a statement on Monday that "we clearly bear some responsibility, because if our car hadn’t moved, there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that."
> The vehicle and the test driver "believed the bus would slow or allow the Google (autonomous vehicle) to continue."
Love this. I'm shamelessly rooting for self-driving cars, and crashes are inevitable. Having the human in the car agree with the computer brings a lot of credibility to the report and follow-up.
I'm actually more impressed that they are trying to code "believed the bus would slow".
Understanding the expected behavior of other drivers is critical to making self driving cars work. And that seems like a pretty hard thing for a computer to figure out.
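As a toy illustration of why it's hard, here is one naive way a planner might guess whether an approaching vehicle is yielding: watch its speed over a short window and check whether the observed braking, if sustained, would stop it before it reaches you. The function name, window, and numbers below are assumptions for illustration only, not anything from Google's system.

    # Naive yield-prediction sketch; thresholds and names are made up.
    def appears_to_yield(speeds_mps, dt_s, distance_m):
        """True if the observed deceleration, if sustained, would stop the
        vehicle before it covers distance_m."""
        if len(speeds_mps) < 2:
            return False
        elapsed = dt_s * (len(speeds_mps) - 1)
        decel = (speeds_mps[0] - speeds_mps[-1]) / elapsed
        if decel <= 0:
            return False        # holding speed or speeding up: not yielding
        v = speeds_mps[-1]
        stopping_distance = v * v / (2 * decel)
        return stopping_distance < distance_m

    # A bus holding ~15 mph (6.7 m/s) over the last second is not yielding,
    # so the AV should wait rather than re-enter the lane.
    print(appears_to_yield([6.7, 6.7, 6.7], 0.5, 20))   # False
    # A bus braking from 6.7 down to 3.2 m/s would stop well short of 20 m.
    print(appears_to_yield([6.7, 5.0, 3.2], 0.5, 20))   # True

The hard part, of course, is that real drivers' intentions aren't captured by a deceleration number, which is exactly what went wrong here.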
If you're having trouble visualizing what happened (like me), here's the intersection: https://www.google.com/maps/@37.3859058,-122.0843836,3a,75y,...
Looks like the far right lane has double the normal lane width to accommodate on-street parking. The Google car, looking to turn right on red, attempted to use the right-most part of the lane to pass cars that were waiting to go straight, but there were sandbags covering the storm drain near the corner, so it had to stop. When the light changed, the cars it had passed continued on while the Google car waited for a gap, which ended up being in front of a bus, likely caused by the bus accelerating more slowly than the cars in front of it. The Google car tried to use the gap to get around the sandbags, assuming that the bus wouldn't just plow into it, but the bus plowed into it. Perhaps the bus driver assumed the Google car was parked and wasn't paying much attention to it.
It is going to be difficult to predict the actions of irrational actors like bus drivers. You can usually assume that a driver has a vested interest in not damaging their vehicle, but my experience navigating California's city streets has consistently suggested otherwise when buses are involved.
I would group them into a category along with police cars and ambulances, due to the prioritization of speed over safety. (Although, in my experience, ambulances are usually very good at prioritizing safety.)
The linked report doesn't say Google is accepting responsibility, so it'll be interesting to see what tack they take here. In _Toronto_, from talking with acquaintances who are drivers, I understand they work in near-constant fear of accidents. They are expected to be able to avoid practically any accident, and the feeling is that if something happens, they're presumed guilty (within management), and things proceed from there.
I don't know if all professional bus drivers' conditions are the same as in Toronto, but I can imagine this could be extremely distressing for the bus driver involved. At least nobody was injured.
The description of events is slightly suspicious. For all the predictive smarts of the car and its impressive LIDAR, keeping from bumping into things seems like it would be the highest possible priority, second only to avoiding loss of life. The corners of the car have some kind of distance sensor, so the car would have to have known it was about to collide with something. In CPU time there was plenty of time to react, and it's always looking at all four corners.
The article makes it sound like the AI blindly changed lanes into the bus. It seems most likely that the AI knew about the impending collision and decided colliding was safer than any other option. It'd be great to know what the other options were, but I imagine we'll probably never get more detail.
Maybe Google should train their cars at demolition derbies instead of on public roads. Have one group of vehicles trying to crash into another. And let competing teams of students program the crashers. Winning team gets $25K.
And play with touch football rules to keep the costs down. Cover vehicles with touch sensors and cameras that record each crash.
Is there any information available about how these cars take evasive action or attempt to reduce damage in the event of an imminent impact? The article's wording makes it sound like the car was at fault for the crash by re-entering the lane:
> "But three seconds later, as the Google car reentered the center of the lane it struck the side of the bus"
Presumably the car's software predicted the crash before it occurred but was unable to completely avoid it. I'd love to know what the programmed behavior is in these 'unavoidable' crash scenarios.
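We'll probably never get that detail, but conceptually this is often framed as picking the least-bad maneuver: score each candidate by the harm it's predicted to cause and take the minimum. Here's a purely speculative sketch; the maneuvers, outcomes, and weights are invented and say nothing about how Google's software actually decides.

    # Speculative "least-bad option" sketch; all values here are invented.
    candidates = {
        "hard_brake":     {"impact_speed_mps": 1.0, "risks_person": False},
        "swerve_left":    {"impact_speed_mps": 0.0, "risks_person": True},
        "continue_merge": {"impact_speed_mps": 6.0, "risks_person": False},
    }

    def cost(outcome, person_penalty=1000.0):
        """Crude cost: an impact-energy proxy plus a huge penalty for any
        option that puts a person at risk."""
        c = outcome["impact_speed_mps"] ** 2
        if outcome["risks_person"]:
            c += person_penalty
        return c

    best = min(candidates, key=lambda name: cost(candidates[name]))
    print(best)   # hard_brake: a low-speed contact beats risking a person

Under a scheme like this, a sub-2 mph scrape against the side of a bus can genuinely be the cheapest option on the table.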
Can the URL be changed to a source that at least links to the report, like http://www.theverge.com/2016/2/29/11134344/google-self-drivi... or http://www.wired.com/2016/02/googles-self-driving-car-may-ca...
This incident hints at what I believe is going to be a big issue in the transition to autonomous vehicles in busy urban areas. If AVs are too passive, human drivers will take advantage of them. If they are too aggressive, accident rates will be high, and the risk of an AV causing a major accident increases. Threading that needle is more difficult than solving the physics of driving.
In the country where I live, buses have the right of way when leaving bus stops. They seem to actually try to hit cars in the next lane (even when the car has almost passed and it of course isn't a "right of way" situation anymore)... So if you program an autonomous car, it would be smart to program it to never get next to a bus :)
The vehicle and the test driver "believed the bus would slow or allow the Google (autonomous vehicle) to continue."
If the test driver stated that s/he thought the car should yield to the bus, would the test driver still have a job? I would have been shocked if the test driver said the software was at fault.
"The vehicle and the test driver "believed the bus would slow or allow the Google (autonomous vehicle) to continue.""
This sounds like a very strong assumption. When was the last time you saw a bus slow down for a car or anything much? There's a reason "Fuck you I'm a bus" is a meme.
My question is - why are bus drivers such assholes (as depicted by the comments here)?
A serious question I'm curious about: is this societal, is it a technology problem (does the way buses work somehow encourage this behavior toward other motorists), or is it a combination of both?
I hate El Camino Real. The rightmost lane is always far too wide, which means that if a bus has stopped, everyone expects you to still squeeze through the limited space. If you don't, they will honk. If you do, there is always a possibility that the giant bus might not see you and hit you.
However, an important takeaway from the article for all Bay Area drivers is:
You are supposed to HUG the right shoulder when making a right turn. I have yet to see a single driver who embraces that principle. It is not just for your safety and the convenience of the traffic behind you, but also for the safety of bike riders; if you make a sudden right turn, they might hit you and get injured.
I think this minor accident is actually a good thing for self-driving cars. It sets the expectation that they may not be perfect, and isn't a PR disaster in the way that a fatal accident may have been.
From here, more accidents are able to happen without being huge news stories. Undoubtedly, the first time someone dies in an accident involving a self-driving car, there will still be lots of questioning of the technology, but it won't come as a complete surprise, now that smaller accidents have occurred.
I don't know about the US, but at least where I live, when somebody hits you from behind, it's their fault. The idea is that you are supposed to drive cautiously and therefore be prepared for the vehicle in front of you to behave erratically -- that is, even if that vehicle's driver is in the wrong (say, it's not their right of way), you should still keep a safe distance.
Google needs to program their AVs to be aware of the universal "law of anything bigger"!
http://vignette2.wikia.nocookie.net/headhuntersholosuite/ima...
Seems like this was inevitable. It's astounding to me that it took this long and that no one was hurt!
It is even more amazing that there have apparently been zero injuries or fatalities. I wonder how these autonomous cars compare in terms of hours on the road to the number of incidents (not accidents, mind you; I hate that term).
So what happened when the bus driver walked over to the self-driving car in order to exchange insurance information?
It would be great if it turned out the Google car swerved into the bus to avoid hitting a fat man.
It's interesting to observe the highly varying amount of room other cars will allow for merging. Merging onto the freeway on a busy day (maybe 35 mph traffic), I tried to merge, and the car I was planning on going in front of moved up until its bumper was about a meter from the vehicle in front of it and leaned on the horn. "I technically have the right of way in this situation and, by God, I won't yield it to you!"
This was a situation where there were about 40 cars merging onto the freeway, and they mostly just did a fairly standard zipper merge.
That was not a small target...
skynet?
Instead of a self-driving car, they should have a self-driving bus.
No, it's the municipal bus that hit the Google car. The municipal buses have become increasingly reckless.
This just in ... one vehicle on the road hit another vehicle on the road.
Uber + Self-driving = Mayhem
Some of these discussions seem to severely underestimate human intelligence and the ability to reason and make decisions rapidly.
There are hundreds of millions of cars operating in all sorts of conditions with little incident. Literally hundreds of thousands of preemptive actions, moments of foresight, bits of experience, and instinctive reactions are at play for every possibility on the road, and millions of people handle them easily.
To say that's not good enough, one must articulate a system that is clearly better, or has the potential to be, without painting human drivers with a broad brush over the 0.1% of incidents or diminishing them. That's an argument of convenience and could reflect a lack of understanding of the scope and scale of the problem. This was just sandbags; there are literally millions of obstacles and scenarios negotiated without incident every day.
Crashes in snow are not always about bad decisions. They can also come from extremely poor conditions that cars should not be in, or from an inadequate vehicle or tyres, and computing will not help if the hardware is deficient. Presuming the worst of others, or jumping to conclusions about their intelligence, is unsavory and dangerous if used to push something.
An AI vehicle in any crowded Asian city is going to be literally stranded by indecision. And even on relatively empty and organized roads, the computing power in use barely scratches the surface of what a proper self-driving AI system needs to be, unless you redesign the roads and place constraints on them, which then becomes a different discussion to approach carefully.