The Biggest Breakthrough in 3D Printing: Automatic 3D Models from Luma AI

By Kerry Stevenson on December 15th, 2022 in news, Service


Cute Santa Dog 3D model entirely generated from a text prompt [Source: Luma AI]

This could be the biggest breakthrough in 3D printing in decades: automatic 3D models.

Luma AI has this week released a closed beta test of a new service, “Imagine 3D”, that can automatically generate 3D models from a user-specified text prompt.

Wait, what?

It goes like this: you type in a sentence describing what you want. An example could be: “Elephant with raised trunk and two tusks standing on a log”. After a short time, the system returns with a full color, 3D model of exactly that.

It’s pure magic.

3D models entirely generated from a text prompt [Source: Luma AI]

The few fortunate Luma AI beta testers who have access are now generating dozens of 3D models, which are accumulating in the “capture” section of Luma AI’s website. They are simply random items. Here are a few examples:

This one is generated from “A screaming crow”. Just that: “a screaming crow”, with no other input provided. Incredible.

Accurate crow 3D model entirely generated from a text prompt [Source: Luma AI]

Note that this is an actual 3D model; you can spin it around and see the crow in all dimensions, just as if someone had modeled it in a CAD program.

Aristotle 3D model entirely generated from a text prompt [Source: Luma AI]

This one is generated from “A highly detailed bust of Aristotle”. Note that it looks a bit better than it should due to the color texture.

All of these models are exportable in both glTF and OBJ formats. I chose to download “Aristotle” and take a closer look.
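For anyone curious about what these exports actually contain, OBJ is a plain-text format that is easy to inspect by hand. Here’s a minimal sketch, using only the Python standard library, that counts vertices and faces in OBJ-format lines (the tiny single-triangle sample is made up for illustration):

```python
# Minimal sketch: inspecting OBJ geometry with the standard library only.
def obj_stats(lines):
    """Count vertex ("v ") and face ("f ") records in OBJ text lines."""
    verts = sum(1 for line in lines if line.startswith("v "))
    faces = sum(1 for line in lines if line.startswith("f "))
    return verts, faces

# A tiny hand-written OBJ (a single triangle) to show the format:
sample = [
    "v 0 0 0",   # vertex 1
    "v 1 0 0",   # vertex 2
    "v 0 1 0",   # vertex 3
    "f 1 2 3",   # one face referencing those three vertices
]
print(obj_stats(sample))  # (3, 1)
```

A real exported model would have thousands of such records, and a library like trimesh or Blender would be the practical tool for checking whether the mesh is watertight enough to print.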

Aristotle 3D model entirely generated from a text prompt with texture removed [Source: Luma AI]

After removing the texture, you can see that it really isn’t the greatest 3D model. You couldn’t say that it is “highly detailed” as per the prompt.

But it is a 3D model. From a text prompt. Which is utterly amazing.

Leopard 3D model automatically generated, and being sliced for 3D printing [Source: Luma AI Discord]

These can indeed be 3D printed, as one beta test participant has demonstrated above.

I had a suspicion this capability would soon be developed after seeing text-to-image services a few months ago. At the time, I thought of it as something “Beyond Generative Design”.

In today’s 3D modeling, “generative” refers to the process of specifying mechanical and other constraints and letting an AI generate (or “figure out”) an optimal 3D solution. What Luma AI has done here is not that at all. There are no constraints; it’s just like creating an image from a prompt, except the result is a 3D model.

It’s just magic. Oh, I said that already, but it needs to be said again.

Aristotle 3D model entirely generated from a text prompt being prepared for 3D printing [Source: Fabbaloo]

I 3D printed Aristotle to see how it came out, and also to demonstrate the end-to-end process of imagining an item and producing it in real life, and it definitely works.

3D print made from a 3D model created by Luma AI’s Imagine 3D text to 3D model service [Source: Fabbaloo]

How does Luma AI make this work? The company hasn’t yet described the method, but it’s likely somewhat related to their existing 3D scanning technology, which I’ve been testing extensively.

Their beta 3D scanner is easily the best I’ve used, as it is able to overcome common problems of lighting, reflectivity and other challenging conditions. Their app is also extremely easy to use as it guides you through a specific scanning process in a way no other app does.

AR-based guiding for a Luma AI 3D scan [Source: Fabbaloo]

Their scanning app takes a number of still images and creates an AI model using “NeRF” technology (Neural Radiance Fields). This model can then recreate a 2D view of a subject from any given angle, even angles where no image was taken. Thus you can “spin” a 3D model to see all sides.

But from there it would be a simple matter to transform those views into an accurate 3D model. It may be that Imagine 3D injects multiple 2D views of a generated subject image into that same scanning process to create the 3D model.
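To make the NeRF idea concrete: the trained model predicts a density and a color at points in space, and a pixel’s color is computed by accumulating those samples along a camera ray, weighted by how much light survives to each point. This toy sketch shows that accumulation step for a single ray; the density and color values are made up, and this is a generic NeRF-style illustration, not Luma AI’s actual (undisclosed) pipeline:

```python
import math

# Toy NeRF-style volume rendering along one ray: each sample has a
# density (sigma) and a color value; samples are blended front-to-back,
# weighted by the transmittance (light not yet absorbed).
def render_ray(densities, colors, step=0.1):
    color = 0.0
    transmittance = 1.0
    for sigma, c in zip(densities, colors):
        alpha = 1.0 - math.exp(-sigma * step)  # opacity of this sample
        color += transmittance * alpha * c     # its weighted contribution
        transmittance *= (1.0 - alpha)         # light remaining afterward
    return color

# Empty space contributes nothing; a dense sample dominates the result.
print(render_ray([0.0, 5.0, 50.0], [0.2, 0.8, 0.5]))
```

Rendering one such ray per pixel, for any chosen camera position, is what lets the model synthesize views from angles that were never photographed.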

Regardless of how it’s done, it definitely works.

I’m on the waitlist to try out Imagine 3D, and when I get into it, I will provide more details.

From some text to a real physical object in your hand, AI has accomplished something amazing [Source: Fabbaloo]

In the meantime, it may be time to think about the staggering implications of this technology, but that’s something for another post.

Via Luma AI

By Kerry Stevenson

Kerry Stevenson, aka "General Fabb" has written over 8,000 stories on 3D printing at Fabbaloo since he launched the venture in 2007, with an intention to promote and grow the incredible technology of 3D printing across the world. So far, it seems to be working!
