General Motors has acquired Strobe, a lidar startup that could give the giant automaker a leg up in the race to make self-driving cars a mainstream technology. Kyle Vogt, founder of the self-driving car startup Cruise (which GM acquired last year), announced the acquisition in a Monday blog post.
Lidar—short for light radar—is widely seen as a key sensor technology for self-driving cars. By sending out laser pulses and measuring how long it takes for them to bounce back, lidar builds a detailed 3-D map of a car’s surroundings.
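The time-of-flight arithmetic behind that 3-D map is simple enough to sketch in a few lines of Python. This is an illustrative calculation, not any particular sensor's firmware; the pulse timing is a made-up example value.

```python
# Hedged sketch of the time-of-flight calculation at the heart of lidar:
# a laser pulse travels out to an object and back, and halving the
# round trip gives the one-way distance.

C = 299_792_458.0  # speed of light in meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Convert a measured pulse round-trip time into a one-way distance in meters."""
    return C * round_trip_seconds / 2.0

# A return after roughly 667 nanoseconds means an object about 100 meters away.
print(round(distance_from_round_trip(667e-9)))  # → 100
```

Repeating this measurement millions of times per second, across many directions, is what turns individual distance readings into the detailed 3-D map described above.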
The first generation of automotive lidar sits on top of the car, spinning around to collect a panoramic 360-degree view of the vehicle’s surroundings. These mechanical systems have worked well enough for building self-driving car prototypes, but their complexity makes it hard to achieve the low cost and durability required for the mass market.
Strobe is one of many startups that have been trying to develop redesigned lidars that are cheap and durable enough for mainstream commercial use. Strobe hasn’t revealed how its technology works, but we can make an educated guess by looking at the academic research of Strobe board member John Bowers. Bowers is a professor in the electrical and computer engineering department at the University of California, Santa Barbara, and he has spent years researching how to pack the key elements of a lidar sensor onto a silicon chip.
Two papers in particular provide an in-depth look at how to build lidar for the mass market. The first, published in 2015, explains how to build a laser capable of being aimed in two dimensions without any moving parts. The second, published last year, provides an overview of how to combine this technique with others to build a “lidar on a chip”—a key step toward building lidars that cost hundreds of dollars rather than thousands.
Three types of solid-state lidar
The story of lidar for self-driving cars goes back to 2005, when David Hall, founder of an audio equipment company called Velodyne, decided to participate in DARPA’s second self-driving car competition. His car didn’t win, but competitors noticed the custom lidar he’d built for the competition. By the time of DARPA’s third competition in 2007, Velodyne’s lidars could be found on several of the vehicles that successfully completed the challenge. Velodyne’s lidars have been an industry standard ever since.
Hall’s design was conceptually simple but technically challenging to manufacture. He mounted an array of lasers on a spinning gimbal. The contraption spins around several times per second, measuring the distance to objects all around the vehicle.
The 360-degree view was helpful, but this design—which is still widely used today—has some significant drawbacks. For one thing, the precision mechanical parts and dozens of lasers in the early Velodyne units were expensive. The Velodyne lidar Google used for its original self-driving car in the early 2010s cost around $75,000. Since then, Velodyne has built smaller, simpler spinning lidars that go for around $8,000 apiece, but that still may be too expensive for mass adoption.
It’s also not clear if this kind of mechanical lidar can withstand the rigors of everyday use. Consumers expect their cars to drive for hundreds of thousands of miles in a variety of climates and road conditions.
Many experts believe the solution is to build “solid-state” lidars that work without having to physically spin the lasers around. A number of companies—including Velodyne itself—have been working to develop solid-state lidars that sell for under $1,000. These lidars are fixed in one place and usually have a much narrower field of view, requiring several lidars to get the same 360-degree visibility provided by a rooftop device. However, these devices are expected to be much cheaper, so it should be possible to buy several solid-state lidars and still save money over the cost of a spinning lidar.
The key challenge for a solid-state lidar is to find a way to shine light in different directions without physically moving a laser around. Some companies, including the German chipmaker Infineon, have built lidars around a micro-electro-mechanical system (MEMS). A tiny mirror, millimeters across, rotates along two axes, directing a fixed laser beam as it scans the scene.
A second approach, known as flash lidar, dispenses with scanning altogether. Instead, it illuminates an entire scene with a single flash, then uses a two-dimensional array of tiny sensors to detect light as it bounces back from different directions.
A big downside to this approach: because it disperses light more widely, it can be difficult to detect objects that are far away or have low reflectivity.
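The flash approach described above amounts to a grid of simultaneous time-of-flight measurements: every pixel in the sensor array records its own round-trip time from a single flash and converts it to a depth. Here is a hedged Python sketch of that readout; the 2×2 frame values are invented for illustration, not real sensor data.

```python
# Illustrative sketch of flash-lidar readout: one flash illuminates the
# scene, and each pixel independently times the light bouncing back,
# producing a depth value per pixel.

C = 299_792_458.0  # speed of light, meters per second

def depth_map(round_trip_times):
    """Turn a 2-D grid of per-pixel round-trip times (seconds) into depths (meters)."""
    return [[C * t / 2.0 for t in row] for row in round_trip_times]

# A toy 2x2 frame: nearer objects return their light sooner.
frame = [[66.7e-9, 133.4e-9],
         [133.4e-9, 200.1e-9]]
for row in depth_map(frame):
    print([round(depth, 1) for depth in row])
```

Because the flash's energy is spread over the whole frame at once, each pixel receives only a small slice of it, which is exactly why distant or dark objects are hard to detect with this design.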
Laser scanning with no moving parts
The systems Bowers built at his UC Santa Barbara lab took a third approach, achieving MEMS-like scanning capabilities without using any mechanical parts—even tiny ones. The approach is described in a 2015 paper, “Fully integrated hybrid silicon two dimensional beam scanner.”
Bowers and his UCSB colleagues used one technique to aim the laser up and down and a different technique to point the laser from side to side. For the first dimension, the UCSB team used a technology called optical phased arrays. A phased array is a row of transmitters that can change the direction of an electromagnetic beam by adjusting the relative phase of the signal from one transmitter to the next.
If the transmitters all emit electromagnetic waves in sync, the beam will be sent out straight ahead—that is, perpendicular to the array. To direct the beam to the left, the array skews the phase of the signal sent out by each antenna, so the signals from transmitters on the left lag behind those from transmitters on the right. To direct a beam to the right, the array does the opposite, shifting the phase of the left-most elements ahead of those farther to the right. Wikipedia has a helpful illustration of how this works.
This technique has been used for decades in radar systems, where the transmitters are radar antennas. Optical phased arrays apply the same principle for laser light, packing an array of laser emitters into a space small enough to fit on a single chip.
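The phase skew a phased array needs follows a standard formula: the phase step between adjacent elements is 2π times the element spacing times the sine of the steering angle, divided by the wavelength. Here is a minimal Python sketch of that relationship; the element count, half-wavelength spacing, and 1550 nm wavelength are assumed illustrative values, not numbers from Strobe or the UCSB papers.

```python
import math

# Hedged sketch of phased-array beam steering. Each element emits the same
# signal with a progressively larger phase offset, so the wavefronts add
# up constructively in the chosen direction.

def element_phases(n_elements: int, spacing_m: float, wavelength_m: float,
                   steer_deg: float) -> list[float]:
    """Phase (radians) each element must emit to steer the beam by steer_deg.

    0 degrees means straight ahead (perpendicular to the array)."""
    delta = (2 * math.pi * spacing_m
             * math.sin(math.radians(steer_deg)) / wavelength_m)
    return [i * delta for i in range(n_elements)]

# Steering straight ahead requires no phase skew at all:
print(element_phases(4, 0.775e-6, 1.55e-6, 0.0))  # → [0.0, 0.0, 0.0, 0.0]
```

Note the electronic simplicity: sweeping the beam across a scene is just a matter of updating these phase offsets, with nothing physically moving.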
In theory, you could build a two-dimensional optical phased array to create a laser that can be aimed along two different axes. But Bowers and his co-authors argue this isn’t practical. If a one-dimensional phased array required n transmitting elements (32 is a typical number) then a two-dimensional phased array would need n-squared elements (1,024, in this example). That’s a big waste of silicon.
Instead, Bowers and his colleagues achieved the second dimension of aiming by varying the frequency of laser light and then passing the light through a grating array that—like an old-fashioned prism—directs light in slightly different directions depending on its color.
In this way, the UCSB team built a laser that can be aimed in two dimensions—up and down, left and right—without any mechanical parts. And they figured out how to fit this whole contraption onto a single chip that’s less than a square centimeter in area.
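The grating-based second axis can be sketched with the standard first-order grating equation, sin(θ) = λ / pitch: change the laser's wavelength and the beam's exit angle changes with it. The grating pitch and tuning range below are assumed, illustrative values, not Strobe's or UCSB's actual design parameters.

```python
import math

# Hedged sketch of wavelength-based beam steering through a diffraction
# grating (the "old-fashioned prism" effect from the article): different
# colors of light leave the grating at different angles.

def grating_angle_deg(wavelength_m: float, pitch_m: float) -> float:
    """First-order diffraction angle for light hitting the grating head-on."""
    return math.degrees(math.asin(wavelength_m / pitch_m))

# Tuning an assumed telecom-band laser from 1530 nm to 1570 nm, against an
# assumed 2-micron grating pitch, sweeps the beam by about two degrees
# with nothing moving:
low = grating_angle_deg(1.53e-6, 2.0e-6)
high = grating_angle_deg(1.57e-6, 2.0e-6)
print(f"beam sweep: {high - low:.2f} degrees")
```

Combined with the phased array handling the other axis, this gives the two-dimensional, no-moving-parts scanning the paper describes.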