The huge, unseen operation behind the accuracy of Google Maps

Wired:

Street View, which launched in 2007, was conceived as a way to improve the user experience by letting people see what the area around their destination looked like, says Brian McClendon, Google Maps VP. “But we soon realized that one of the best ways to make maps is to have a photographic record of the streets of the world and refer back to those whenever there’s a correction,” McClendon said.

And as the data collected by Street View grew, the team saw that it was good for more than just spot-checking their data, says Manik Gupta, group product manager for Google Maps. Street View cars have now driven more than 7 million miles, including 99 percent of the public roads in the U.S. “It’s actually allowing us to algorithmically build up new data layers from information we’ve extracted,” Gupta said.

Those algorithms borrow methods from computer vision and machine learning to extract features like street numbers painted on curbs, the names of businesses and other points of interest, speed limits and other traffic signs. “Stop signs are trivial, they’re made to stick out,” McClendon said. Turn restrictions—which directions you can turn at a given intersection—are a big deal for navigation, but they’re trickier to capture with algorithms. Sometimes the arrows that tell you which turns are legal are painted on the road, sometimes they’re overhead. They can be different colors and sizes. “Lane markers are harder because they’re not consistent, but we’re getting much smarter about that,” McClendon said.

Beyond the algorithms is an application called Atlas that lets an army of Google map workers fine-tune the data.
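McClendon's point that stop signs are "made to stick out" maps neatly onto classical computer vision. As a purely illustrative sketch, and emphatically not Google's actual pipeline, here's roughly what a naive detector might look like: threshold for saturated red, then keep large, roughly octagonal blobs. Everything here (OpenCV, the HSV thresholds, the area cutoff, the function name) is my own assumption for illustration; Google's system is presumably far more sophisticated and learned rather than hand-tuned.

```python
import cv2
import numpy as np

def find_stop_sign_candidates(image_path: str):
    """Toy heuristic: return bounding boxes of regions that look like stop signs.

    Thresholds for saturated red in HSV space, then keeps large contours whose
    polygonal approximation has roughly eight sides. Illustrative only.
    """
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Red wraps around the hue axis, so combine two hue ranges.
    lower = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))
    mask = cv2.bitwise_or(lower, upper)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    boxes = []
    for c in contours:
        if cv2.contourArea(c) < 500:  # ignore small specks
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if 7 <= len(approx) <= 9:     # roughly octagonal
            boxes.append(cv2.boundingRect(c))  # (x, y, w, h)
    return boxes
```

Even a toy like this hints at why lane arrows and turn restrictions are harder: there's no single color or shape to key on, so hand-written rules give way to learned models and, ultimately, to human review in tools like Atlas.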

It’d be interesting to learn about Apple’s approach and see how far down this road that team has travelled. I’m wondering whether Apple has learned from Google’s approach or created its own methodology from scratch.