Inside Waymo's Secret World for Training Self-Driving Cars
There's a lot of misunderstanding about self-driving. Mostly because nobody is publishing much.
If you want to do it right, you start with geometry. The first step is capturing range imagery and grinding it down to a 3D model of the world. This tells you where you physically can go. That's where we were at the DARPA Grand Challenge over a decade ago.
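To make that concrete, here's a minimal sketch of the idea: collapse range returns into an occupancy grid that answers "where can I physically go?" This is nothing like Waymo's actual pipeline; every name and threshold here is an assumption, purely for illustration.

    import numpy as np

    CELL_SIZE = 0.2      # meters per grid cell (assumed resolution)
    GRID_EXTENT = 100.0  # meters covered in each direction

    def occupancy_grid(points):
        """points: (N, 3) lidar returns in the vehicle frame (x, y, z)."""
        n = int(2 * GRID_EXTENT / CELL_SIZE)
        grid = np.zeros((n, n), dtype=bool)
        # Ignore returns near the ground plane; everything else is an obstacle.
        obstacles = points[points[:, 2] > 0.3]
        ix = ((obstacles[:, 0] + GRID_EXTENT) / CELL_SIZE).astype(int)
        iy = ((obstacles[:, 1] + GRID_EXTENT) / CELL_SIZE).astype(int)
        keep = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
        grid[ix[keep], iy[keep]] = True
        return grid

Real systems work in 3D with proper ground-plane estimation, but the point stands: geometry first.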
Then comes moving-object popout: what out there isn't a stationary obstacle? That comes from processing multiple frames and from range-rate data from radars. Moving objects have to be tracked.
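Roughly, two cheap cues get you most of the popout: a grid cell occupied now but free a frame ago suggests motion, and a radar return's Doppler range rate is a direct motion measurement. Again an illustrative sketch with assumed thresholds, not anyone's real code:

    import numpy as np

    def moving_cells(grid_now, grid_prev):
        # Occupied in this frame but free in the previous one:
        # something probably moved in.
        return grid_now & ~grid_prev

    def radar_movers(range_rates, threshold=0.5):
        # range_rates: (N,) m/s Doppler measurements, ego-motion compensated.
        # Anything faster than ~0.5 m/s is moving by definition.
        return np.abs(range_rates) > threshold

Each flagged detection then gets handed to a tracker so its trajectory can be followed across frames.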
Only then does object recognition come into play. Moving objects should be classified if possible, and fixed objects of special interest (signs, barricades, etc.) identified. Not all objects will be identified, but that's OK. If it can't be identified and it's moving, you have to assume the worst case - it's vulnerable and it doesn't follow the road rules. This will force a slowdown and careful maneuvering around the unknown object. (See Chris Urmson's SXSW video of their vehicle encountering someone in a powered wheelchair chasing a duck with a broom.)
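The worst-case rule amounts to a policy like this; the names and thresholds are assumptions, just to show the shape of it:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TrackedObject:
        is_moving: bool
        label: Optional[str]  # None = classifier has no confident answer

    def speed_limit_near(obj: TrackedObject, base_speed_mps: float) -> float:
        # An unknown mover might be a person and might ignore road rules,
        # so crawl past it at walking pace with extra clearance.
        if obj.is_moving and obj.label is None:
            return min(base_speed_mps, 2.0)
        return base_speed_mps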
Predicting the behavior of moving objects is a big part of handling heavy traffic. That's what Google is working hard on now.
Machine learning is a part of this, but not the whole thing. You can't really trust machine learning; it's a statistical method and sometimes it will be totally wrong. You can trust geometry.
If you want to do it wrong, you start out with lane keeping and smart cruise control, tack on recognition of common objects, throw in some machine learning and hope for the best. This produced the Tesla self-crashing car, noted for running into unusual obstacles which partially block the lane and for trusting lane markings way too much. Look closely at the videos from Tesla and Cruise, slowing them down to real time. They speed them up so you can't easily see how bad the object recognition is.
Really well-written and engaging article. It seems the main point of an article like this, from Google's perspective, is gaining the public's trust. I'm sure most people are worried about the edge cases that are so hard to run into in the real world but so important to get right. This is Google saying "we may not see that edge case exactly, but we know when it could happen and simulate the situation a million times to make sure we get it right if we ever see it again."
Now this is doing it right. That's how car companies do it. Big test tracks, and years of test track time, plus simulation and test rigs.
As the article points out, Google has far more autonomous driving miles than everybody else put together. They're on the hard cases, too; not just driving on freeways.
I'm disappointed the article wasn't a bit more skeptical of some of the claims. Certainly the simulation-based testing is a good thing, but stats about how many billions of simulated miles have been driven can create a self-reinforcing delusion if everyone involved isn't careful to remember that the simulations can only cover well-known and expected situations. It sounds like Waymo realizes this and is building a huge library of scenarios to evaluate, but a million simulated miles are not worth a hundred real miles in terms of confidence in the system, and that is not the message this article conveys.
Overall, though, this article does hint that autonomous vehicles are much further away than anyone wants to admit. The story about being flummoxed by multilane roundabouts is depressing. If they didn't know such things existed, then they were incredibly sloppy in their data gathering. If they truly thought scaling from a simple roundabout to a multilane one would be trivial, then their staff don't have the right mindset for this problem space.
Also note that all the testing is happening in flat, desert landscapes where there are no weather or lighting challenges. Their pictured model residential street only has stubs of driveways. No houses, no trees. I'm sure they know these are gaps but I worry they underestimate the challenges of adapting to entirely different driving environments. Especially when machine learning is involved and you're simulating 99% of your mileage...
Looking forward to checking back in 2040 though.
> But Peng also presented the position of the traditional automakers. He said that they are trying to do something fundamentally different. Instead of aiming for the full autonomy moon shot, they are trying to add driver-assistance technologies, “make a little money,” and then step forward toward full autonomy. It’s not fair to compare Waymo, which has the resources and corporate freedom to put a $70,000 laser range finder on top of a car, with an automaker like Chevy that might see $40,000 as its price ceiling for mass-market adoption.
> “GM, Ford, Toyota, and others are saying ‘Let me reduce the number of crashes and fatalities and increase safety for the mass market.’ Their target is totally different,” Peng said. “We need to think about the millions of vehicles, not just a few thousand.”
I wonder if an easier way to reduce fatalities would be to target impaired driving and develop a detection mechanism that tries to identify whether the driver is impaired. It seems like an easier and more localized problem than building a fully autonomous vehicle.
Looks like this is the location of the test facility they talk about... https://www.google.com.au/maps/@37.3705986,-120.5747932,237m... Interesting choice of name for their Expressway...
EDIT: Apple maps has up to date satellite imagery https://maps.apple.com/?q=37.3718,-120.5749&t=k
I found this to be especially interesting "She spent countless hours going up and down 101 and 280, the highways that lead between San Francisco and Mountain View. Like the rest of the drivers, she came to develop a feel for how the cars performed on the open road. And this came to be seen as an important kind of knowledge within the self-driving program. They developed an intuition about what might be hard for the cars. “Doing some testing on newer software and having a bit of tenure on the team, I began to think about ways that we could potentially challenge the system,” she tells me."
Clearly the progress being made in the area of self-driving cars is undeniable. Seeing the articles and discussions crop up on a daily basis is making me wonder whether we are headed for an AI Winter scenario in this area within the next decade, or whether this is the real deal and we will see self-driving cars at dealerships within 20 years.
This is so far from being my area of expertise. Just one observer's thoughts/questions. Insane how far we have come so quickly, at any rate.
Self-driving cars are based on machine learning, which is basically processing massive amounts of past data. This is great for routine situations, and it seems Waymo is making good progress covering most of these. However, the key weakness is the lack of true intelligence. When anything unusual or unexpected happens, the best the car can do is shut down safely and wait for a human to intervene. It can't really analyze and understand its environment, because that requires intelligence.
It seems a best-of-both-worlds solution would be a new type of job: data-gathering driver. It would be basically like the Google Street View cars, but with many more sensors and input from the driver. These drivers would be assigned neighborhoods to drive through and would note anything unusual or noteworthy. Stop sign is down because of last night's storm? The driver notes this, and now all Waymo cars know. Street closed due to construction? All Waymo cars will avoid that street. This type of data requires intelligence to analyze and understand, and it would be fed into the Waymo system for all self-driving cars to benefit from. There could be millions of self-driving cars and a few thousand of these drivers feeding daily updated data into the system. You could even increase the frequency of the drivers depending on weather conditions: maybe if there's a storm, the drivers drive their loop hourly instead of once daily. A sketch of what that might look like is below.
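Concretely, the idea is something like a dated event feed that every car consults before routing. This is entirely hypothetical; the names and fields are invented for illustration:

    import time
    from dataclasses import dataclass

    @dataclass
    class RoadEvent:
        lat: float
        lon: float
        kind: str   # e.g. "stop_sign_down", "street_closed"
        note: str
        reported_at: float

    FLEET_FEED = []  # stand-in for a shared fleet-wide service

    def report(lat, lon, kind, note):
        FLEET_FEED.append(RoadEvent(lat, lon, kind, note, time.time()))

    def active_events(max_age_s=24 * 3600):
        now = time.time()
        return [e for e in FLEET_FEED if now - e.reported_at < max_age_s]

    # A scout notes a downed stop sign; every car can now route around it.
    report(37.37, -120.57, "stop_sign_down", "knocked over in last night's storm")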
Recently, Waymo and Lyft launched a self-driving vehicle partnership [0]. With Trump's nomination of Derek Kan (Lyft's general manager in Southern California) to serve as Under Secretary of Transportation for Policy [1], would that ease Waymo's path toward autonomous-car supremacy?
[0] https://www.reuters.com/article/us-lyft-waymo-collaboration-...
[1] https://www.whitehouse.gov/the-press-office/2017/04/06/presi...
Unless I'm misunderstanding it, on the animated image of the car turning, which includes a wireframe representation of the scene as well as the camera views (https://cdn.theatlantic.com/assets/media/img/posts/2017/08/W...), the generated geometry doesn't seem to include the bike that quickly passes behind the car at the intersection...
> for an average of once every 890 miles, or 0.80 disengagements per 1,000 miles.
Isn't that ~1.12 per 1,000 mi (1,000 / 890 ≈ 1.12)? A rate of 0.80 per 1,000 miles would be one disengagement every 1,250 miles.
Interesting stuff. I wonder to what extent Google and Tesla could benefit from sharing each other's datasets. Tesla has far more real-world data than Google at this point in time, but Google has the better virtual environment to test in.
The fact that Waymo revealed their "secret" tools for advancing this crucial technology implies that either:
1) They believe no one can quite catch up before they can launch the technology. Since they know several competitors have huge resources and brilliant people, it means they are quite close to launch.
2) These tools have become an open secret within the industry, so no harm is done to their competitive position by revealing them to the public; there is only good PR to be gained, perhaps to attract more bright engineers.
I suspect 2) is more likely since several key players have moved around so much in the past couple of years. Relatively high-level knowledge about how autonomous vehicles are being developed at Waymo might have become well-known within the industry by now.
Great to see such progress made by Waymo and Co. But as Murphy's law says, if something can break, it eventually will. With possibly millions of self-driving cars and trillions of unique situations, it's inevitable that someone will get hurt. So my question is: what kind of progress is being made on a legal framework for the situation in which I'm riding my bike in the bike lane and, for whatever reason, a self-driving vehicle strikes me? Whom do I go after for my piling-up medical expenses? Do I sue the driver? The car company? The company that delivered the self-driving hardware? The software manufacturer? I get that self-driving cars come with incredible advantages: fewer crashes (hopefully), no drunk drivers, etc. But a legal framework for such automobiles should be in the works as we speak.
Edited: of course Murphy's law; thanks for pointing it out!
That's pretty fucking cool.
Machine Learning for Self-Driving Cars. High-level Development Process for Autonomous Vehicles. https://www.slideshare.net/jwiegelmann/machine-learning-for-...
Recently got turned down for a job here, and now I'm remarkably glad I was: this is an incredibly difficult problem to tackle, and they need absolutely brilliant people to achieve success.
I wish them the best of luck in their continued progress! I'll stick to easier stuff :)
Waymo should just get training data in India.
Can they simulate a hurricane?