OpenAI develops text-to-model platform capable of generating 3D images in seconds


by godlove4241

Having already revolutionized artistic creation with DALL-E and writing with ChatGPT, OpenAI is now working to make its mark on the 3D modeling space.

In a recent OpenAI paper, researchers Heewoo Jun and Alex Nichol described the development of Shap-E, a 3D text model that radically simplifies the means of generating 3D assets. It has the potential to disrupt the status quo in a range of industries, including architecture, interior design and gaming.

Selection of 3D samples generated by Shap-E. Photo: OpenAI.

Although still in the early stages of research and development, Shap-E allows users to enter a text prompt that generates a 3D model, one that can potentially be 3D-printed. For example, the researchers posted images of “a traffic cone,” “a chair that looks like a tree,” and “an airplane that looks like a banana.”

Currently, producing 3D models requires considerable expertise in industry-specific software, such as 3ds Max, Autodesk Maya, and Blender.

An airplane that looks like a banana. GIF: OpenAI.

“We present Shap-E, a conditional generative model for 3D assets,” Jun and Nichol wrote in the paper, Shap-E: Generating Conditional 3D Implicit Functions. “When trained on a large dataset of paired 3D and textual data, our resulting models are able to generate complex and diverse 3D assets in seconds.”

Shap-E is OpenAI’s second foray into 3D modeling and follows Point-E, whose release in late 2022 coincided with that of ChatGPT, which monopolized media and consumer attention. Another reason for Point-E’s somewhat lackluster launch was the inconsistent results it produced. While Shap-E’s renders have yet to match the quality of industry competitors, its speed is striking, with the open-source software taking as little as 13 seconds to produce a model from a text prompt.

Comparison of Point-E and Shap-E images. Photo: OpenAI.

In addition to speed, Shap-E’s renders have softer edges and lighter shadows, are less pixelated than its predecessor’s, and don’t rely on a reference image. The researchers said it “achieves comparable or better sample quality despite modeling a higher-dimensional” output space.

OpenAI continues to work on Shap-E, with the researchers noting that its rough results can be smoothed using other 3D generative programs, although further refinement may require OpenAI to work with larger, labeled 3D datasets. For now, 3D model enthusiasts can access the files and instructions on Shap-E’s open-source GitHub page.

Follow Artnet News on Facebook:


