In the past couple of years, a number of intersecting trends in the automotive and technology worlds have come to be grouped together as mobility. This is not a reference to an IoT-enabled version of those scooters you see people riding at the grocery store but is instead a catch-all covering electric vehicles, self-driving vehicles, and ride-hailing services—either on their own or packaged together. It’s shorthand for a vision of the future where traffic jams and traffic deaths are a thing of the past, as are carbon emissions and maybe even car ownership. Some of that stuff is still decades away from widespread deployment, and a lot of infrastructure—both physical and digital—needs to be built to get us there. One company with a particularly fresh approach to doing that is a startup called Civil Maps.
A car needs to be able to do several things in order to be fully autonomous. First, it has to know exactly where it is, where it’s supposed to go, and the route it needs to take. It ought to know its location to within a few centimeters, because no one likes it when you drive on the wrong side of the road or park on a sidewalk. So we need very accurate maps, ones much more precise than the trusty road atlas or the turn-by-turn directions we now get from the likes of Google and Apple. What’s more, the entire road network—which amounts to more than 2.5 million miles of paved roads in the US—can’t just be mapped once or even once a month. The initial base map has to be updated constantly to reflect potholes and road closures and all the other obstacles that a vehicle might encounter.
Next, the car has to be able to perceive its environment. That ability will require each car to carry an array of sensors, the data from which will be fused together. We’re more risk-averse about trusting our lives to machines than we are about trusting them to other human drivers, so plenty of redundancy is warranted. And fusing the input from a mix of sensor types—lidar, radar, optical cameras, and so on—should give the car a better picture of the world around it than we can get from our eyes and ears.
But wait, we’re not done yet. Together, the perception and localization layers then require interpretation. Different road signs need to be understood and obeyed. And it’s not enough to merely recognize (for example) a pedestrian by the side of the road. From experience, human drivers infer that they should slow down because that person may be about to cross the road; an autonomous car should be able to make the same inference.
Do more with less
As you might guess from its name, Civil Maps—which in true startup fashion was based in a four-bedroom house in Albany, California, until recently—is working on those high-definition maps. It’s not alone in this space. DeepMap is providing HD maps as a service. Google, obviously, is at work here too. We’ve previously covered Here, now jointly owned by, among others, Audi, BMW, Daimler, and Intel. And individual OEMs are branching out into cartography, too; General Motors recently mapped the entire US and Canadian highway network to enable its new Super Cruise system. Lacking the deep pockets of a multinational (or three) to pay for a fleet of mapping vehicles and vast storage arrays, Civil Maps has been forced to take a lean approach to its maps.
“The other companies that do this, they use raster images or point cloud data as their base map layer. And this is very big, and that doesn’t scale well,” co-founder and CEO Sravan Puttagunta told me. Indeed, the resources required for the task are a primary reason why the first fully self-driving vehicles—which we describe as level 4 autonomy—will be geofenced within a few cities. (Cars with complete autonomy to drive anywhere at any time are at level 5.) “They’re not actually scaling past the geofencing because the data logistics of operating the base map is very, very difficult,” Puttagunta said.
“One of the things that separates us from other companies is the size of our base map, which is very small. It’s actually 10,000 times smaller than our closest competitors,” Puttagunta explained.
The map-making involves three different layers: on top of the base map sit a vector layer, which describes the geometry of the road, and a semantic layer, which encodes something akin to the business rules for using it. Those latter two layers are the ones that need to be updated as frequently as possible—near-real time is the goal—and pretty much everyone’s approach to doing that is to crowdsource it. As a self-driving car travels along the road comparing what it sees to its base map, it will upload a delta of the differences back to the cloud. With a sufficient density of cars doing that, you get the coverage you need.
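To make that layering concrete, here’s a minimal sketch in Python of how a map tile and a crowdsourced delta might be represented. The names and fields are my own illustrative assumptions, not Civil Maps’ actual data model:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: class and field names are assumptions,
# not Civil Maps' actual format.

@dataclass
class MapTile:
    tile_id: str
    base: bytes                                    # compact localization layer
    vectors: list = field(default_factory=list)    # road geometry (lanes, edges)
    semantics: dict = field(default_factory=dict)  # rules: speed limits, signage

@dataclass
class MapDelta:
    """A difference a car observed between the world and its current tile."""
    tile_id: str
    layer: str    # "vectors" or "semantics": the two crowdsourced layers
    change: dict  # e.g. {"closed_lane": 2} or {"new_sign": "stop"}

def report_difference(delta: MapDelta, cloud) -> None:
    """Upload only the small delta, not the raw sensor stream."""
    cloud.submit(delta)
```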
“But what they’re not able to crowdsource is the base map; they have to send a survey car out there, collect the base map data, bring it back to their office, use their cloud stack to essentially look at any irregularities in the base map, fix them, and then publish a base map on a hard drive which goes into the car,” Puttagunta said.
So the data volumes to create the base map are rather hefty, as they’re built in the cloud from raw 3D lidar point clouds or raster images. Because of the way Civil Maps generates its base maps—using AI to discard all but the relevant sensor data to create an “edge map”—the volume of data uploaded to the cloud is much smaller, so its technology should allow for the crowdsourcing of all three layers in the stack.
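Civil Maps hasn’t published the details of that AI, but the general shape of the idea is simple enough: score each point in a raw frame for localization value and discard the rest before anything leaves the car. A hedged sketch, with the scoring model as a stand-in:

```python
import numpy as np

def make_edge_map(point_cloud: np.ndarray, relevance) -> np.ndarray:
    """Reduce a raw lidar frame to the points worth keeping.

    point_cloud: (N, 3) array of x, y, z samples.
    relevance:   callable returning a usefulness score per point; this
                 stands in for the machine learning Civil Maps describes,
                 whose details aren't public.
    """
    scores = relevance(point_cloud)   # shape (N,)
    return point_cloud[scores > 0.9]  # the compact "edge map"
```

The payoff is that uploads shrink from raw point clouds to something small enough to be realistically crowdsourced.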
Smaller data volumes offer other benefits, as well. At around 200KB per kilometer, it becomes a lot easier to fit an entire country’s roads onto a car’s internal storage. The smaller data volume also means smaller bandwidth bills because the uploads and downloads are smaller. And it’s easier for cars to cache more of that data, which should compensate for the problem of poor connectivity in rural areas (a concern we hear regularly from our audience in relation to autonomous driving). “Especially in rural areas, about 10 to 12 megabytes will be an entire city. So you can fit that into a cache. And when you have updates, those updates can be stored locally on your own cache. Whenever there is connectivity, they’ll be shared with our cloud structure,” Puttagunta told me.
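That deferred-sync pattern is a familiar one for intermittently connected systems. A minimal sketch of what it might look like on the car, reusing the hypothetical MapDelta from earlier (again, the names are assumptions):

```python
class TileCache:
    """On-car cache: serve map tiles locally, queue updates until online."""

    def __init__(self) -> None:
        self.tiles: dict = {}    # tile_id -> tile; ~10-12MB can cover a city
        self.pending: list = []  # deltas awaiting connectivity

    def record_delta(self, delta) -> None:
        # The car applies the update locally right away...
        self.tiles.setdefault(delta.tile_id, []).append(delta)
        # ...and holds it for upload until a connection appears.
        self.pending.append(delta)

    def on_connectivity(self, cloud) -> None:
        while self.pending:
            cloud.submit(self.pending.pop(0))
```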
One issue that map makers like Civil Maps and Here are both grappling with is how to create platforms for a heterogeneous mix of cars. After all, there is no industry consensus yet on either the mix of sensors to be fused or their placement. But at the same time, having multiple walled gardens for different makes of autonomous vehicles is highly undesirable; more cars contributing to the same platform means greater coverage of the road network, not to mention bigger training sets for the machine learning algorithms that will process that data. “If a car around the next corner hits the brakes because there’s an obstruction, that information could be used to signal to the drivers behind to slow down ahead of time, resulting in smoother, more efficient journeys and a lower risk of accidents. But that can only work if all cars can speak and understand the same language,” said Dietmar Rabel, Here’s head of autonomous driving product management.
Here’s solution has been to pull together a group of industry players (including Bosch, Daimler, and TomTom) to agree on a common standard for vehicle-to-cloud data. Called Sensoris, this common standard is being coordinated by a European organization called ERTICO. “Defining a standardised interface for exchanging information between the in-vehicle sensors and a dedicated cloud as well as between clouds will enable broad access, delivery and processing of vehicle sensor data; enable easy exchange of vehicle sensor data between all players, and finally enable enriched location based services which are key for mobility services as well as for automated driving,” said Hermann Meyer, CEO at ERTICO.
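Sensoris itself defines detailed message schemas; purely to illustrate what a common vehicle-to-cloud interface buys you, here is a hypothetical, heavily simplified observation message. To be clear, this is not the actual Sensoris format:

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical stand-in for a standardized vehicle-to-cloud message;
# NOT the real Sensoris schema.

@dataclass
class VehicleObservation:
    vehicle_id: str
    timestamp_utc: float
    position: Tuple[float, float, float]  # latitude, longitude, elevation
    event: str                            # e.g. "hard_braking", "obstruction"
    confidence: float                     # 0.0 to 1.0

def publish(observation: VehicleObservation, cloud) -> None:
    # Because the shape is agreed upon, any OEM's car can emit it
    # and any participating cloud can consume it.
    cloud.ingest(observation)
```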
This kind of sensor agnosticism is important to Puttagunta and Civil Maps as well. Civil Maps isn’t part of the Sensoris effort, but it has developed its own sensor fusion stack. “You want to make the best usage of whatever’s available,” Puttagunta told me, adding that it needn’t just be the expected ones like lidar or cameras. “If there’s like a microphone or some sort of signal from the suspension of the car, all of that would go into the sensor fusion stack.”
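In software terms, sensor agnosticism usually means coding against a common interface rather than any particular piece of hardware. A minimal sketch of the idea (the interface is my assumption, not Civil Maps’ API):

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Anything that can contribute evidence: lidar, camera, radar,
    or even a microphone or suspension telemetry."""

    @abstractmethod
    def read(self) -> dict:
        """Return a measurement in a sensor-neutral form."""

def fuse(sensors: list) -> list:
    # Use whatever happens to be on this particular vehicle; a real
    # stack would also weight each reading by the sensor's reliability.
    return [s.read() for s in sensors]
```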
His background in image fingerprinting comes into play here. “Fingerprinting is actually a software stack that has the ability to convert raw sensor data into these signatures. And then we can use these fingerprints/signatures to find the car’s position and orientation,” he explained.
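Localization then reduces to matching the live signature against signatures stored in the map. A toy sketch, with a coarse spatial histogram standing in for whatever proprietary transform Civil Maps actually computes:

```python
import numpy as np

def fingerprint(frame: np.ndarray) -> np.ndarray:
    """Compress a raw (N, 3) sensor frame into a compact signature.
    (A coarse 3D histogram here; the real transform is proprietary.)"""
    hist, _ = np.histogramdd(frame, bins=8)
    vec = hist.ravel().astype(float)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def localize(live_sig: np.ndarray, map_sigs: dict):
    """Return the stored pose whose signature best matches the live one."""
    return max(map_sigs, key=lambda pose: float(live_sig @ map_sigs[pose]))
```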
Civil Maps calls its approach 6D, for six degrees of freedom, as the car will know not only its position along the three movement axes but also its attitude—otherwise known as pitch, roll, and yaw. With that information, the car can orient itself, and the map can be projected into the field of view of its sensors, which can then focus on specific areas and ignore others. The advantage, the company says, is that knowing where to “look” saves the car time and computing overhead that would otherwise be needed to repeatedly derive a semantic understanding of the world around it and its relationship to the map. This concept is perhaps best explained by the following short video:
Something else you can see in that video is that the company’s augmented reality map was also designed to be easily understood by a human occupant. The free space, occupied lanes, road signage, and traffic lights are highlighted and flagged in a clean and simple format compared to the raw and complex point cloud streams we’ve seen while riding in other autonomous vehicles recently. (Although that video is obviously cleaned up a little, you can also see several minutes of raw sensor fusion here.) And as it turns out, data that’s more easily understood by humans is also more easily processed by computers. So instead of using multiple expensive GPUs, Civil Maps can make this work on an ARM Cortex processor.
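To ground the 6D terminology from above: six degrees of freedom is just a full rigid-body pose, three numbers for position and three for orientation. Here’s an illustrative sketch of what that enables, using a yaw-only rotation for brevity (the names are mine, not Civil Maps’):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose6D:
    x: float; y: float; z: float           # position: the three movement axes
    roll: float; pitch: float; yaw: float  # attitude: the three rotation axes

def landmark_in_vehicle_frame(pose: Pose6D, landmark: np.ndarray) -> np.ndarray:
    """Transform a map landmark into the car's frame, so a sensor knows
    where to 'look' for it instead of re-deriving the whole scene."""
    c, s = np.cos(pose.yaw), np.sin(pose.yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # yaw only
    return R.T @ (landmark - np.array([pose.x, pose.y, pose.z]))
```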
I’m not alone in having been impressed with this young company’s work. Last year, Ford invested in the company as part of its $6.6 million seed round. It’s definitely one to watch out for in the coming years.