You probably don’t need SMERF for 3D printing, but I believe you will find it interesting.
SMERF is an acronym for “Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration”.
Hold on, what’s that all about?
It deals with the emerging science of radiance fields in 3D capture. Traditionally, 3D scenes were captured using direct techniques such as laser or infrared scanning, or photogrammetry, where a series of images taken from many angles is interpolated into a 3D model.
More recently, work has been done to use radiance fields and AI tools to capture 3D scenes. These rely on modeling the paths of light through a scene, and the approach can be quite effective. One 3D scanning app I use that leverages this technology is Luma AI.
https://www.fabbaloo.com/news/hands-on-with-the-luma-nerf-3d-scanner-app
This app operates in a manner similar to traditional photogrammetry 3D scanning apps, but instead uses neural radiance fields behind the scenes to generate the 3D model. Basically, the system is able to “guess” the missing angles, and that allows reconstruction of the entire 3D model. It works very well.
Many have been using this technology to capture entire scenes rather than individual objects, primarily for applications such as real estate, virtual reality, and gaming. But there’s a problem: rendering these scenes in real time can be very difficult.
Enter SMERF. It’s an open source project that seems to solve this issue. They explain:
“In this work, we introduce SMERF, a view synthesis approach that achieves state-of-the-art accuracy among real-time methods on large scenes with footprints up to 300 m^2 at a volumetric resolution of 3.5 mm^3. Our method is built upon two primary contributions: a hierarchical model partitioning scheme, which increases model capacity while constraining compute and memory consumption, and a distillation training strategy that simultaneously yields high fidelity and internal consistency.”
You can imagine the issue here: 300 square meters at 3.5mm resolution is a staggering amount of data to process, particularly in real time. Scenes like these are typically explored through online viewers that let users navigate through them. Imagine, for example, a home buyer gliding through the rooms of a prospective purchase.
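To get a feel for just how staggering that amount of data is, here’s a quick back-of-envelope calculation. The footprint and voxel size come from the paper’s figures; the ceiling height is my own assumption, purely for illustration:

```python
# Rough estimate of the raw voxel count for a SMERF-scale scene.
# footprint and voxel size are from the paper; height is an assumption.
voxel_mm = 3.5        # voxel edge length in mm (paper's stated resolution)
footprint_m2 = 300    # scene footprint in m^2 (from the paper)
height_m = 2.5        # assumed ceiling height, not from the paper

voxels_per_m = 1000 / voxel_mm                # ~286 voxels along one meter
voxels = footprint_m2 * voxels_per_m**2 * (height_m * voxels_per_m)
print(f"{voxels:.1e} voxels")                 # on the order of 10^10
```

That works out to well over ten billion voxels before you even attach color or density values to each one, which is why naive real-time rendering is out of the question.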
It seems that the folks behind SMERF have devised an interesting way to partition the data that allows for ultra-smooth real-time rendering. They say the frame rate is three orders of magnitude faster than other radiance field approaches. You can see how smoothly SMERF works in this video:
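The basic idea behind the partitioning, as I understand it, is that the scene footprint is divided into a grid of smaller submodels, and the viewer’s camera position determines which submodel gets streamed and rendered at any moment. Here’s a hypothetical sketch of that lookup; the cell size and function names are my own illustration, not taken from the SMERF codebase:

```python
import math

# Assumed submodel footprint edge in meters (illustrative value only)
CELL_M = 5.0

def submodel_index(x: float, y: float) -> tuple:
    """Map a camera position (in meters) to the grid cell of the
    submodel responsible for that region of the scene."""
    return (math.floor(x / CELL_M), math.floor(y / CELL_M))

# A camera at (12m, 3m) would fall in grid cell (2, 0)
print(submodel_index(12.0, 3.0))
```

Only the submodel near the camera needs to be resident in memory, which is how a scene far too large for any single model can still be explored smoothly.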
This technology is not directly applicable to 3D printing applications, at least not yet. However, it does demonstrate there are significant developments in 3D technology that may at some point be applied to our industry.
Via GitHub (Hat tip to Bruce)