While discussions around Generative AI often revolve around its future impact, it's just as important to recognize how it's being used in the present. This week, we'll dive into how 3D artists are currently leveraging AI and explore ways for you to incorporate it into your workflow.
GenAI for Concepting
One of the most obvious applications of Generative AI is crafting concept images from text prompts. Many artists already use these AI-generated images in their exploration and ideation process. However, a significant challenge with this workflow is the lack of control over composition. When conceptualizing, artists typically have a rough layout in mind, which can be challenging to achieve with text prompts alone.
This is where 3D workflows can drive and guide the Generative AI output. By setting up the camera and composing the basic elements of the scene in your preferred 3D software, you can render out a frame and use it to direct the composition of the AI-generated image. This workflow is available in tools like Stable Diffusion and Firefly, as demonstrated in the video below (find a full tutorial at the end of this newsletter).
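If you want to experiment with this outside of a packaged tool, here's a minimal sketch of the idea using the open-source diffusers library, where your rough render drives an image-to-image pass. The model checkpoint, file names, and strength value are illustrative assumptions, not recommendations:

```python
# Minimal img2img sketch with Hugging Face diffusers: a rough 3D render
# guides the composition of the generated concept image.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Checkpoint and file paths are placeholders -- swap in your own.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("blender_render.png").convert("RGB").resize((768, 512))

result = pipe(
    prompt="moody sci-fi corridor, volumetric light, concept art",
    image=init_image,
    strength=0.55,  # lower = stays closer to your 3D composition
    guidance_scale=7.5,
).images[0]
result.save("concept_pass.png")
```

The key dial is strength: it controls how far the AI is allowed to drift from your 3D layout.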
This approach empowers creatives to gain control over the composition of their generated images. You might wonder, "Why bother exporting an image when I could do everything in my 3D software?" Well, that's precisely what some artists are exploring.
GenAI as a Renderer
Artists like Martin Nebelong are pioneering the use of Generative AI as a renderer inside 3D software like Blender. By leveraging tools such as Krea AI and the AI Render add-on, which connects Blender to Stable Diffusion, artists can achieve near-immediate render times while hallucinating details that would be too difficult or time-consuming to create traditionally.
Video by Martin Nebelong https://twitter.com/MartinNebelong/status/1724919110830633328
While these AI hallucinations can be challenging to control and may exhibit quirks, the implications and use cases are compelling, especially for rapid ideation and visualization.
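Tools like AI Render handle the plumbing for you, but for the curious, here's a rough sketch of the first half of that loop: saving a frame from Blender's Python API so it can feed an img2img model like the one above. This is my own generic approximation, not the add-on's actual code:

```python
# Sketch: render the current Blender scene to disk so it can be fed to
# an img2img model. Run from Blender's Python console or a script.
import bpy

scene = bpy.context.scene
scene.render.filepath = "/tmp/ai_guide_frame.png"  # illustrative path
scene.render.image_settings.file_format = 'PNG'
bpy.ops.render.render(write_still=True)
# The saved frame can now drive Stable Diffusion (via AI Render or an
# external script) to "re-render" the viewport with AI-hallucinated detail.
```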
GenAI for Texturing
Several tools, including Runway AI, Polycam, and the new beta release of Substance 3D Sampler, enable text-to-material generation for those seeking texturing assistance. Similar to text-to-image, these tools let you create tileable materials and, in some cases, even individual texture maps like base color, roughness, and metalness.
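None of these commercial tools publish their internals, but if you want to play with tileable generation yourself, there's a well-known community trick for open-source Stable Diffusion: patch every convolution to use circular padding so the output wraps seamlessly at the edges. A sketch, with the same assumed diffusers setup as earlier:

```python
# Community trick for tileable textures with open-source Stable Diffusion:
# force every conv layer to wrap around (circular padding) so the image
# tiles seamlessly. NOT how the commercial tools above work internally.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Patch both the UNet and the VAE so generation and decoding both wrap.
for model in (pipe.unet, pipe.vae):
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            module.padding_mode = "circular"

tile = pipe("seamless mossy cobblestone texture, top-down, albedo").images[0]
tile.save("cobblestone_basecolor.png")
```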
GenAI for Background Images
Many 3D artists work on creating digital twins of real-world objects, pouring countless hours into perfecting the design. When it comes time to display their creations for reviews or marketing purposes, they often want to contextualize their designs in relevant environments – a coffee shop for a coffee brand package, a muddy terrain for a rugged boot, or an art gallery for a sculptural installation.
Rather than modeling and texturing entire environments solely for contextualization, artists are leveraging Generative AI to generate 2D background images. They then integrate their hero 3D objects into these AI-generated backgrounds, creating quick proof-of-concept visuals. The latest Substance 3D Stager includes Firefly, allowing for easy background generation and integration using the "Match Image" feature.
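Stager's "Match Image" feature handles lighting and perspective matching for you, but the basic compositing step behind this workflow is simple enough to sketch in a few lines of Pillow, assuming your hero object was rendered with a transparent background (all file names and offsets below are placeholders):

```python
# Sketch: composite a hero render (with alpha) over an AI-generated
# background for a quick proof-of-concept visual.
from PIL import Image

background = Image.open("ai_coffee_shop.png").convert("RGBA")
hero = Image.open("hero_package_render.png").convert("RGBA")  # transparent BG

# Scale and place the hero; sizes and offsets here are arbitrary.
hero = hero.resize((background.width // 2, background.height // 2))
offset = (background.width // 4, background.height // 2)
background.alpha_composite(hero, dest=offset)
background.convert("RGB").save("proof_of_concept.png")
```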
GenAI for 3D Modeling
It's worth mentioning that several companies have also released text-to-3D modeling capabilities. While the outputs may still lack the quality many 3D artists need, they can be beneficial for early design stages or immersive experiences. Companies like Bezi, Luma AI, Meshy, and 3DFY.ai, along with research projects like NVIDIA's Magic3D and Google's DreamFusion, are exploring this workflow, allowing users to generate usable 3D models from text prompts that can serve as proxies before final modeling.
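Since most of these services export standard formats like glTF, dropping a generated model into your scene as a placeholder is straightforward. A quick sketch for Blender, with a hypothetical file path:

```python
# Sketch: import a text-to-3D output (e.g., a downloaded GLB) into
# Blender as a proxy mesh. The file path is a placeholder.
import bpy

bpy.ops.import_scene.gltf(filepath="/tmp/generated_chair.glb")

# Tag the imports so they're easy to find and replace during final modeling.
for obj in bpy.context.selected_objects:
    obj.name = f"PROXY_{obj.name}"
```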
Conclusion
While there's still much to consider regarding Generative AI in 3D, these techniques are already seeping into the industry. If we had to summarize their current use, it would be:
Use AI to fill out the world around your hero objects.
At present, AI workflows cannot create production-ready hero objects with the necessary fidelity, precise materials, or controllable rendering. However, they excel at quickly ideating and filling out the 3D world around your hero character or prop.
If I've missed any noteworthy workflows, feel free to reach out on LinkedIn or via email. I’m always eager to discuss the latest developments!
3D News of the Week
RenderMan 26 released! - RenderMan/Pixar
A Stylized 3D Sculpt of The Office's Dwight Made in Blender - 80.lv
Autodesk’s Next-Generation Viewport System - Nvidia.com
Future Apple Vision Pro may get Bob Ross-style virtual painting tools - Apple Insider
The dry-for-wet virtual production behind the stunning plane escape in ‘No Way Up’ - Before and Afters
3D Merch is here!
Click here to Get Your 3D Artist Swag!
3D Tutorials
3D Job Spreadsheet
Link to Google Doc With A TON of Jobs in Animation (not operated by me)
Check Out The New Wednesday Artist Spotlight Email!
Want to be featured!?!?! Submit your work here
Hello! Michael Tanzillo here. I am the Head of Technical Artists with the Substance 3D Growth team at Adobe. Previously, I was a Senior Artist on animated films at Blue Sky Studios/Disney with credits including three Ice Age movies, two Rios, Peanuts, Ferdinand, Spies in Disguise, and Epic.
In addition to my work as an artist, I am the Co-Author of the book Lighting for Animation: The Visual Art of Storytelling and the Co-Founder of The Academy of Animated Art, an online school that has helped hundreds of artists around the world begin careers in Animation, Visual Effects, and Digital Imaging. I also created The3DArtist.Community and this newsletter.
www.michaeltanzillo.com
Free 3D Tutorials on the Michael Tanzillo YouTube Channel
Thanks for reading The 3D Artist! Subscribe for free to receive new posts and support my work. All views and opinions are my own!