This could be the biggest breakthrough in 3D printing in decades: automatic 3D models.
Luma AI has this week released a closed beta test of a new service, "Imagine 3D", that can automatically generate 3D models from a user-specified text prompt.
Wait, what?
It goes like this: you type in a sentence describing what you want. An example could be: "Elephant with raised trunk and two tusks standing on a log". After a short time, the system returns a full-color 3D model of exactly that.
It's pure magic.
The few fortunate Luma AI beta testers who have access are now generating dozens of 3D models, which are accumulating in the "capture" section of Luma AI's website. They are simply random items. Here are a few examples:
This one is generated from "A screaming crow". Just that: "a screaming crow" and no other input was provided. Incredible.
Note that this is an actual 3D model; you can spin it around and see the crow in all dimensions, just as if someone had modeled it in a CAD program.
This one is generated from "A highly detailed bust of Aristotle". Note that it looks a bit better than it should due to the color texture.
All of these models are exportable in both glTF and OBJ formats. I chose to download "Aristotle" and take a closer look.
After removing the texture, you can see that it really isn't the greatest 3D model. You couldn't say that it is "highly detailed" as per the prompt.
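If you'd like to examine one of these exports yourself, here's a minimal sketch of how a downloaded file could be inspected and prepared for printing. Luma AI doesn't provide or require any of this; I'm simply using the open-source trimesh Python library, and the filename is a placeholder.

```python
# Minimal sketch: inspect an Imagine 3D export and prepare it for printing.
# Assumes the open-source trimesh package (pip install trimesh); the filename
# "aristotle.obj" is a placeholder for whatever the actual export is called.
import trimesh

# Load the OBJ (or glTF) export.
mesh = trimesh.load("aristotle.obj", force="mesh")

# A quick measure of how "highly detailed" the geometry really is.
print(f"Vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
print(f"Watertight: {mesh.is_watertight}")  # a watertight mesh prints more reliably

# STL discards the color texture, leaving just the raw geometry for a slicer.
mesh.export("aristotle.stl")
```

The exported STL is exactly what a slicer needs if you want to print the result.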
But it is a 3D model. From a text prompt. Which is utterly amazing.
These can indeed be 3D printed, as one beta test participant has demonstrated above.
I had a suspicion this capability would soon be developed after seeing text-to-image services a few months ago. At the time, I thought of it as something "Beyond Generative Design".
In today's 3D modeling, "generative" refers to the process of specifying mechanical and other constraints and letting an AI generate (or "figure out") an optimal 3D solution. What Luma AI has done here is not that at all. There are no constraints; it's just like creating an image from a prompt, except the result is a 3D model.
It's just magic. Oh, I said that already, but it needs to be said again.
I 3D printed Aristotle to see how it came out, and also to demonstrate the end-to-end process of imagining an item and producing it in real life, and it definitely works.
How does Luma AI make this work? It hasn't yet been described by the company, but it's likely somewhat related to their existing 3D scanning technology, which I've been testing extensively.
Their beta 3D scanner is easily the best I've used, as it is able to overcome common problems of lighting, reflectivity and other challenging conditions. Their app is also extremely easy to use, as it guides you through a specific scanning process in a way no other app does.
Their scanning app takes a number of still images and creates an AI model using "NeRF" technology (Neural Radiance Fields). This model can then recreate a 2D view of the subject from any given angle, even angles where no image was taken. Thus you can "spin" a 3D model to see all sides.
But from there it would be a simple matter to transform those views into an accurate 3D model. It may be that Imagine 3D injects multiple 2D views of a generated subject image into that same scanning process to create the 3D model.
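To make that guess concrete, here's a toy sketch of the final step as I imagine it: sample a learned density field (the heart of a NeRF-style model) over a regular grid, then extract a surface with marching cubes. The sphere-shaped density function below stands in for whatever Imagine 3D actually learns; this is speculation about the general shape of the pipeline, not Luma AI's code.

```python
# Toy sketch of a suspected "NeRF-style field -> mesh" step. The density function
# below is a stand-in for whatever Imagine 3D actually learns; only the overall
# pipeline (sample a field on a grid, run marching cubes) is the point.
import numpy as np
from skimage.measure import marching_cubes  # pip install scikit-image

# Sample a 3D density field on a regular grid. A real NeRF would be a neural
# network queried at each (x, y, z); here a simple sphere plays that role.
grid = np.linspace(-1.0, 1.0, 128)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
density = 1.0 - np.sqrt(x**2 + y**2 + z**2)  # positive inside a unit sphere

# Extract the iso-surface where the density crosses zero, giving vertices and
# faces that could be written out as an OBJ or STL and 3D printed.
verts, faces, normals, values = marching_cubes(density, level=0.0)
print(f"Extracted mesh: {len(verts)} vertices, {len(faces)} faces")
```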
Regardless of how itās done, it definitely works.
I'm on the waitlist to try out Imagine 3D, and when I get access, I will provide more details.
In the meantime, it may be time to think about the staggering implications of this technology, but thatās something for another post.
Via Luma AI