Interesting research on 3D scanning at the Hong Kong Polytechnic University may open up some very intriguing possibilities.
The researchers at “PolyU” have long worked in the field of 3D scanning, and this development is the latest in a string of new techniques.
The problem they’ve investigated is human body scanning. There are a number of issues with this practice, not the least of which is that subjects are typically clothed, which obviously confuses the 3D scanner: the surfaces of the clothes are scanned, not the actual human underneath.
There are other issues in body scanning, such as the lengthy duration required to complete scans of uncooperative and jittery subjects, as well as the enormous cost of current 3D body scanners; we’ve seen systems costing well over USD$100,000. The cost of such systems is often driven by the need for dozens of imaging elements that trigger simultaneously. This synchronicity eliminates subject jitter, as it captures a split-second pose.
The PolyU researchers attempted to circumvent these issues by taking on the challenge in an entirely different way, one that leverages machine learning and the fact that human subjects, by and large, are structurally similar.
Instead of employing dozens of imaging elements, their system requires only a couple of 2D images taken from different angles. This immediately and dramatically reduces the system’s cost, as well as the time needed to capture the images.
Then the real work occurs in software. The PolyU researchers trained their system on 10,000 subjects to deeply understand human body shape and structure in various poses. Thus when a subject is 3D scanned in this manner, the resulting 3D model is computed from that prior learning.
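PolyU hasn’t published its architecture in detail here, but the general idea, two views in, a vector of learned body-shape parameters out, can be sketched. Below is a minimal illustration assuming a PyTorch-style convolutional encoder regressing coefficients of a statistical body model; the names (`BodyShapeNet`, `num_shape_params`) and layer choices are mine, not PolyU’s.

```python
import torch
import torch.nn as nn

class BodyShapeNet(nn.Module):
    """Illustrative two-view regressor: front + side image -> shape parameters.

    A shared CNN encodes each 2D view, the features are fused, and a small
    MLP regresses parameters of a learned statistical body model (e.g.
    blend-shape coefficients). A generic sketch, not the PolyU design.
    """
    def __init__(self, num_shape_params: int = 50):
        super().__init__()
        # Shared convolutional encoder, applied to each view independently.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch, 128, 1, 1)
            nn.Flatten(),             # -> (batch, 128)
        )
        # Fuse the two view encodings and regress the shape parameters.
        self.head = nn.Sequential(
            nn.Linear(2 * 128, 256), nn.ReLU(),
            nn.Linear(256, num_shape_params),
        )

    def forward(self, front: torch.Tensor, side: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.encoder(front), self.encoder(side)], dim=1)
        return self.head(feats)

# Two grayscale 256x256 views of the subject (batch of 1).
net = BodyShapeNet()
front = torch.rand(1, 1, 256, 256)
side = torch.rand(1, 1, 256, 256)
shape_params = net(front, side)  # (1, 50): coefficients of a body model
```

The training against 10,000 subjects would then amount to fitting a network like this so that the predicted parameters reproduce the known ground-truth body shapes.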
They say the system is able to not only capture the 3D model in less than 10 seconds, but also simultaneously derive 50 different measurements, many of them the measurements typically used to fit clothing.
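How might measurements fall out of a reconstructed model? One plausible approach is to slice the body at known heights and measure each cross-section’s perimeter. The sketch below does this on a point cloud with a convex-hull approximation; real garment measurements follow the body surface more closely, so treat this as my simplification rather than PolyU’s published method.

```python
import numpy as np
from scipy.spatial import ConvexHull

def circumference_at_height(points: np.ndarray, height: float,
                            tolerance: float = 0.01) -> float:
    """Approximate body circumference at a given height (units: metres).

    Takes the reconstructed body points lying within `tolerance` of the
    slicing plane, projects them to 2D, and measures the perimeter of
    their convex hull.
    """
    band = points[np.abs(points[:, 2] - height) < tolerance]
    if len(band) < 3:
        raise ValueError("Not enough points in slice to measure")
    hull = ConvexHull(band[:, :2])     # project the slice onto the XY plane
    ring = band[hull.vertices, :2]     # hull vertices in order around the loop
    # Sum of edge lengths around the hull, closing the loop at the end.
    return float(np.sum(np.linalg.norm(np.roll(ring, -1, axis=0) - ring, axis=1)))

# Example: a synthetic cylindrical "torso" of radius 0.15 m.
theta = np.random.uniform(0, 2 * np.pi, 5000)
z = np.random.uniform(0.8, 1.4, 5000)
torso = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta), z])
print(circumference_at_height(torso, height=1.1))  # ~0.94 m, i.e. 2*pi*0.15
```

Repeat this at the waist, chest, hips and so on, and a list of 50 tailoring measurements is not hard to imagine.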
The implication here is that these measurements could then be fed into a system that generates (or “fits”) clothes to a person. Imagine a machine that could produce a perfectly fitting suit in minutes. This is theoretically possible, given this 3D scanning tech.
While generated clothing isn’t exactly 3D printing, I think there are some very strong implications here, especially when you consider how this 3D scanning approach could be leveraged in other ways.
Few items of clothing are 3D printed today, but shoes are one. We’ve seen a few 3D scanners that specialize in capturing foot details, and that collected information could be used by a smart system to generate an appropriate shoe 3D model for production.
But the PolyU technique could be used in many different ways. What would happen, for instance, if instead of training their system on human poses they did so on other 3D objects? Yes, you could do other biological subjects, like pets, plants and so on, but I think there could be mechanical applications as well.
Imagine a smartphone app that captures a few images of, say, a clamp, connector or adapter. If properly trained, this app could determine the nature of the item and reproduce its 3D model, which you could then 3D print. This could be a way to 3D print simple replacement parts for the workshop, home and office. If not to 3D print them, then at least you could use the system to identify the part or narrow down a search for a replacement.
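As a sketch of what such an app’s back end might look like, assuming a classifier trained on a catalog of common parts and a library of matching printable models (the labels, paths and helper names below are all hypothetical):

```python
from pathlib import Path
import torch

# Hypothetical label set the classifier was trained on.
PART_LABELS = ["c_clamp", "barrel_adapter", "rj45_coupler", "hose_connector"]
# Library of printable models, one per recognizable part (paths illustrative).
MODEL_LIBRARY = {label: Path("models") / f"{label}.stl" for label in PART_LABELS}

def identify_part(images: list, classifier: torch.nn.Module) -> str:
    """Average the classifier's predictions over several photos of the part
    and return the most likely label."""
    with torch.no_grad():
        logits = torch.stack(
            [classifier(img.unsqueeze(0)).squeeze(0) for img in images])
    probs = torch.softmax(logits, dim=1).mean(dim=0)
    return PART_LABELS[int(probs.argmax())]

def model_for_part(label: str) -> Path:
    """Map a recognized part to a printable 3D model in the library."""
    return MODEL_LIBRARY[label]

# Demo with an untrained stand-in classifier; a real app would load
# weights trained on labeled photos of each part.
classifier = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, len(PART_LABELS)))
photos = [torch.rand(3, 64, 64) for _ in range(3)]
label = identify_part(photos, classifier)
print(label, "->", model_for_part(label))  # e.g. c_clamp -> models/c_clamp.stl
```

For standardized parts, retrieving a known-good model from a library like this is far easier than reconstructing geometry from scratch, which is why the identification step alone could be valuable.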
This type of “smart” 3D scanning seems quite attractive. It’s very powerful, yet requires little hardware. The only thing holding it back from widespread use is the extensive training required to recognize different forms.
But that needs to be done only once.