Text to 3D: The Massive Implications of Luma AI’s Imagine 3D Technology

By Kerry Stevenson on December 16th, 2022 in Ideas, news


Using Text to 3D AI to design 3D models? [Source: Fabbaloo / SD]

This week Luma AI announced a new 3D model generation technology that, while crude today, will ultimately dominate in the future.

As we reported, Luma AI has adapted their NeRF (Neural Radiance Field) AI-based 3D scanning technology to enable an entirely new way to generate 3D models: Text to 3D.
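If you're wondering what a "radiance field" actually is: a neural network learns to answer, for any point in space, "how opaque is it here, and what color?" Images are then rendered by compositing those answers along camera rays. Here's a toy Python sketch of that compositing step. This is the textbook NeRF math, not Luma AI's actual code, and the densities and colors are synthetic stand-ins for what the network would predict:

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite a pixel color along one camera ray, NeRF-style."""
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: the chance the ray reaches sample i without being blocked
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    # The pixel color is the transmittance-weighted sum of sample colors
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: 64 samples along a ray passing through a fuzzy red blob.
# In a real NeRF these values come from a neural network queried at each
# 3D point; here they're synthetic stand-ins.
n = 64
t = np.linspace(0.0, 1.0, n)
densities = 5.0 * np.exp(-((t - 0.5) ** 2) / 0.01)  # density peaks mid-ray
colors = np.tile([1.0, 0.2, 0.2], (n, 1))           # reddish everywhere
deltas = np.full(n, 1.0 / n)
print(render_ray(densities, colors, deltas))        # -> an RGB triple
```

The clever part of a NeRF is that this rendering math is differentiable, so the network can be trained from ordinary photos; Imagine 3D effectively swaps those photos for guidance from a text prompt.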

Text to 3D

That’s right: you literally type in a sentence, and the system generates a fully solid — and printable — 3D model. It even includes a full color texture. Don’t believe me? Here is a 3D printed model that was created in this way, with no CAD used at all:

This is very similar to recently popular Text to Image systems, such as DALL-E 2, Midjourney or Stable Diffusion, except that instead of a 2D image, Luma AI’s “Imagine 3D” produces 3D models.

Is this just a common parlor trick? Well, the answer is yes — and definitely not.

The 3D models produced by Imagine 3D so far are actually pretty crude. You can see in my previous post that the sample I printed was decent, but certainly not of outstanding detail. The "highly detailed bust of Aristotle" was deficient in other ways: one ear is missing, and there's a strange dent in the back where you'd expect solid anatomy.

To me, that’s just details. The point here is that Luma AI was able to successfully produce a system that can generate relatively competent 3D models from a mere text prompt. Over time they, and certainly others, will refine this concept just as the Text to Image systems are improving.

This reminds me of the editorial cartoon in which a dog plays chess and an onlooker complains that the "dog always loses". This is the same scenario: it's amazing that this CAN BE DONE AT ALL.

Eventually we’ll have a system that can readily produce 3D models on demand in reasonable quality — with NO CAD systems or effort required. All you’ll need is an imagination, skills in developing a suitable prompt for the system, and (probably) a subscription to the AI CAD system.

What are the implications of such a technology set loose upon the world? There are many, but here are two big ones that I see coming, and coming fast.

Consumer 3D Printing

Using voice to design 3D models? [Source: Fabbaloo / DE]

Years ago there was a buzz about people suddenly being able to have 3D printers in their homes. Anything could be 3D printed! Put one in every room!

The buzz subsided when eager consumers discovered that the machines weren’t particularly reliable, and, more importantly, required 3D content to print that they could not make themselves. In response, a number of online repositories of 3D models appeared, often made by the 3D printer manufacturers themselves to support their sales efforts.

These repositories mostly failed general consumers because it's essentially impossible to hold "all the models". You simply don't know what people will want, so the game was to add more and more 3D models. Today Thingiverse is closing in on 6M 3D models, and it's nearly impossible to find anything specific. Searching has proven fruitless: you either find nothing you want, or you find 12,441 vases and don't have a week to scroll through them.

Now the tables have turned: with a competent Text to 3D system, literally any consumer could “create” content on demand and it would be ready to print.

There will now be instant content for consumer 3D printers.

Smart 3D printer manufacturers will recognize this and begin to incorporate this functionality directly into their devices. It would seem feasible to build a 3D printer that could be verbally asked to “make a teacup”, and it would do so without any other input.
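To make that concrete, here's a rough sketch of what such a pipeline might look like. Every function here is hypothetical, not a real API; each one stands in for a component a manufacturer would have to build or license:

```python
# Hypothetical sketch of a voice-driven Text to 3D printer pipeline.

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text engine."""
    ...

def generate_model(prompt: str) -> bytes:
    """Stand-in for a Text to 3D service returning a mesh (e.g. STL)."""
    ...

def slice_mesh(mesh: bytes) -> bytes:
    """Stand-in for a slicer turning the mesh into printer G-code."""
    ...

def print_job(gcode: bytes) -> None:
    """Stand-in for streaming G-code to the printer's controller."""
    ...

def voice_to_print(audio: bytes) -> None:
    prompt = transcribe(audio)      # "make a teacup"
    mesh = generate_model(prompt)   # text -> 3D mesh
    gcode = slice_mesh(mesh)        # mesh -> toolpaths
    print_job(gcode)                # toolpaths -> physical teacup
```

Notice that the last two stages already exist in every consumer 3D printer workflow today; only the first two are new.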

Just as people are slowly starting to shift from stock photo services to AI image generators, the same might happen with 3D models.

Believe me, this is going to happen, sooner than you think.

The result of this capability should be a massive increase in demand for consumer-level desktop 3D printers equipped with Text to 3D capability. With that will inevitably come all kinds of new services and capabilities as economies of scale finally, after decades, kick in.

Wither CAD?

Another big implication is the effect on CAD tools. Today there is an enormous spectrum of tools, with many specializing in designing particular types of objects. Sophistication and pricing of these tools range from zero to "everything".

My suspicion is that the lower-level, near-free CAD tools will be impacted first. Rather than using, say, Tinkercad to produce a dragon 3D model, you could have a Text to 3D tool do the job just as well, and a heck of a lot faster.

However, if you've seen any of the 3D models produced by Imagine 3D, you'll have noticed they are not particularly precise, lack detail, and absolutely could never be used as mechanical parts.

Or could they? Where might this technology go in the future?

What if the AI were trained on vast libraries of mechanical parts instead of random cartoon shapes? What if the AI tools incorporated fixed dimensional constraints into the generation process? It just might be possible to generate accurate parts. Correction: it WILL be possible to do this, just not immediately.
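Generation under true dimensional constraints is the hard research problem, but a crude version can be bolted on after the fact by rescaling the generated mesh. Here's a minimal sketch using the open source trimesh library; the filenames and the 80 mm target are made up for illustration:

```python
import trimesh  # open source mesh library: pip install trimesh

def constrain_dimension(mesh, axis, target_mm):
    """Uniformly rescale a mesh so its bounding box hits a target size
    along one axis (0=X, 1=Y, 2=Z). A crude post-hoc stand-in for real
    dimensional constraints, which would act inside the generator."""
    current_mm = mesh.extents[axis]           # bounding-box size on that axis
    mesh.apply_scale(target_mm / current_mm)  # uniform scale keeps proportions
    return mesh

# Hypothetical usage: force a generated bracket to be exactly 80 mm tall.
mesh = trimesh.load("generated_bracket.stl")  # made-up filename
constrain_dimension(mesh, axis=2, target_mm=80.0)
mesh.export("bracket_80mm.stl")
```

Of course, scaling one overall dimension is a far cry from holding a bolt circle or a bearing bore to tolerance; that will require constraints inside the generation process itself.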

Another approach could be used within CAD tools. Imagine SOLIDWORKS being able to quickly generate a basic part shape using a Text to 3D capability. The CAD user could then "work on" that generated model to polish it up and add or adjust any required features. That might speed up 3D model creation severalfold.

Another way CAD tools could use this technology is similar to what the AI image generators call “Image to Image”. In this approach, an image is loaded up and then text commands are used to modify it, like “remove that guy’s moustache”, “add a jar of pickles on the table”, “add some dust to all surfaces”, and so on. I can imagine loading in (or generating) a base 3D model and then using text commands of that type to modify it.
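To illustrate the idea, here's a toy sketch of text commands driving mesh edits, again using trimesh. The three-command vocabulary is invented; a real system would parse free-form language with an AI model rather than matching keywords:

```python
import numpy as np
import trimesh

def apply_command(mesh, command):
    """Toy dispatcher mapping an invented command vocabulary onto mesh
    edits, standing in for an AI-driven 'text to edit' capability."""
    verb, *args = command.split()
    if verb == "scale":                        # e.g. "scale 1.5"
        mesh.apply_scale(float(args[0]))
    elif verb == "mirror":                     # e.g. "mirror x"
        axis = "xyz".index(args[0])
        matrix = np.eye(4)
        matrix[axis, axis] = -1.0              # reflect across that axis
        mesh.apply_transform(matrix)
    elif verb == "smooth":                     # Laplacian smoothing pass
        trimesh.smoothing.filter_laplacian(mesh)
    else:
        raise ValueError(f"unknown command: {command}")
    return mesh

# Hypothetical session: load a base model, then edit it by text.
mesh = trimesh.load("base_model.stl")          # made-up filename
for cmd in ("scale 1.2", "mirror x", "smooth"):
    apply_command(mesh, cmd)
mesh.export("edited_model.stl")
```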

I see two ways forward here for CAD.

First, existing CAD tool producers might begin to incorporate this functionality into their product suites as described above. This way they could ride the AI wave forward and increasingly spice up their products with AI mojo.

Alternatively, existing CAD tool producers might ignore all this — at their peril. They might then be subject to growing competition from new startups that engineer new styles of CAD tools that are based on and fully leverage Text to 3D concepts.

Which way will the CAD tool producers go? We cannot know, but they’d better be talking about this RFN.

By Kerry Stevenson

Kerry Stevenson, aka "General Fabb" has written over 8,000 stories on 3D printing at Fabbaloo since he launched the venture in 2007, with an intention to promote and grow the incredible technology of 3D printing across the world. So far, it seems to be working!
