Last week Apple introduced a new iPad Pro equipped with a LiDAR sensor.
I was extremely excited about this development, because it implied we would soon see plenty of 3D scanning software for the iPad. LiDAR can be a high-resolution depth-sensing technology, and the prospect of being able to carry it around wherever you go was quite enticing.
This week, courtesy of the folks at iFixit, we learned a bit more about the mysterious sensor.
iFixit Disassembly
iFixit is dedicated to determining the “repairability” of Apple products and other popular devices. Their website provides a number of repair guides for different models, all based on their uncanny core ability to disassemble the seemingly impossible.
Apple and other manufacturers have in recent years made disassembly of their products incredibly challenging. It’s not clear if this is a strategy to defeat the DIY fixers, or a side effect of modern manufacturing and design processes. Nevertheless, it’s fascinating to watch iFixit break into a device using surprising methods.
iPad Pro Teardown
In the iPad Pro video, iFixit tears the device down to its basic components and determines what each one does, offering insight beyond what Apple itself discloses. One of the components inspected in this teardown was the LiDAR sensor.
At top you can see the new iPad Pro’s camera module. It contains two regular imaging cameras, each with a different focal length. The iPad’s camera software performs some magic to combine their output into “smart images” that can be manipulated in interesting ways.
But on the right is the new LiDAR sensor.
A key question many of us have been wondering about is the resolution of this sensor. Based on iFixit’s experiments, it seems the LiDAR sensor’s resolution is actually rather low.
Apple iPad Pro LiDAR Resolution
Here is an image of the infrared beam pattern sprayed out by an iPhone’s facial recognition system. Each light dot represents a “pixel” for which distance information is returned. By taking the distances of all pixels, software is able to reconstruct a 3D model of the subject.
This is the same subject, but illuminated by the normally invisible infrared beams from the new LiDAR sensor. As you can see, there are significantly fewer dots on the subject and background.
This suggests the resolution of the LiDAR sensor is quite low. Imagine, for example, trying to reconstruct the subject in the image, a head sculpture, using only the handful of dots that strike it. Now imagine the same reconstruction using the far more plentiful iPhone dot matrix. I think you get the idea.
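To make the dot-count argument concrete, here’s a minimal sketch of why each dot matters. Every returned dot is essentially a pixel coordinate plus a measured distance, and scanning software back-projects it into a single 3D point using a pinhole camera model. This is only an illustration, not Apple’s actual pipeline, and the dot counts and camera parameters below are assumptions for demonstration, not measured figures.

```python
import numpy as np

def unproject(samples, fx, fy, cx, cy):
    """Back-project (u, v, depth) samples from one frame into 3D points.

    Assumes a simple pinhole camera model: each dot the sensor returns
    is a pixel coordinate plus a distance along the ray through it.
    """
    u, v, z = samples[:, 0], samples[:, 1], samples[:, 2]
    x = (u - cx) * z / fx  # horizontal offset scaled by depth
    y = (v - cy) * z / fy  # vertical offset scaled by depth
    return np.column_stack([x, y, z])

# Illustrative dot counts only: tens of thousands of dots for a Face
# ID-style projector versus a few hundred for the iPad Pro's LiDAR.
rng = np.random.default_rng(0)
face_id_frame = rng.uniform([0, 0, 0.2], [640, 480, 0.5], (30_000, 3))
lidar_frame = rng.uniform([0, 0, 0.2], [640, 480, 0.5], (500, 3))

hi = unproject(face_id_frame, fx=500, fy=500, cx=320, cy=240)
lo = unproject(lidar_frame, fx=500, fy=500, cx=320, cy=240)
print(hi.shape, lo.shape)  # one frame: 30,000 points vs. only 500
```

However many frames you capture, the per-frame point count is fixed by the dot pattern, which is why the sparse pattern looks so limiting at first glance.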
Apple LiDAR Plans
Why would this LiDAR scanner have such low resolution? It seems Apple designed it for a specific purpose: room scanning. This aligns with their AR strategy, where virtual items are mixed into the surrounding physical scene, and the software must have a basic idea of how far away real surfaces are in order to place virtual objects properly.
Thus it seems the LiDAR sensor would make a poor 3D scanner.
Or would it?
A 3D model is generated from many scanned points, which together form a point cloud. In a scanning scenario, points are gathered continuously as the scanner moves across the scene. In other words, the static images of the dot patterns above represent only a single frame of many, perhaps thousands, that might be captured for a subject.
The idea is to have the scanner operator gradually sweep the dot pattern across each area of the subject. The more dots the sensor projects, the faster this goes; with fewer dots it takes longer. In the case of the iPad Pro, it may be possible to obtain a decent 3D scan simply by sweeping the beams across the subject many more times than you’d otherwise need, as sketched below.
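As a rough sketch of that idea: assuming the device’s motion tracking supplies a camera pose for each frame, merging many sparse frames into one dense cloud is essentially a transform-and-concatenate loop. The frame and dot counts below are illustrative assumptions, not specifications.

```python
import numpy as np

def accumulate(frames, poses):
    """Merge per-frame points (camera coordinates) into one world-space cloud.

    frames: list of (N, 3) point arrays, one per captured depth frame
    poses:  list of (R, t) pairs, each frame's camera-to-world rotation
            and translation, supplied by the device's motion tracking
    """
    world = [pts @ R.T + t for pts, (R, t) in zip(frames, poses)]
    return np.vstack(world)

# Toy run: 1,000 frames from a 500-dot sensor, each shifted slightly as
# if the operator were sweeping the iPad across the subject.
rng = np.random.default_rng(1)
frames = [rng.normal(0.0, 0.1, (500, 3)) for _ in range(1_000)]
poses = [(np.eye(3), rng.normal(0.0, 0.01, 3)) for _ in range(1_000)]

cloud = accumulate(frames, poses)
print(cloud.shape)  # (500000, 3): density recovered through repetition
```

The catch, of course, is registration: every frame’s pose must be known quite accurately, or the merged cloud smears into mush. That would be the hard engineering problem for anyone attempting this with the iPad Pro.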
Could that approach really work? I’m not sure it’s practical, but you can bet several 3D scanning companies are trying it out right now.
Via iFixit