Nvidia Research has unveiled GANverse3D, a new AI-powered technology that can turn a simple photo into an animated, customizable 3D model.
Once again, the artificial intelligence developed by Nvidia is showing off its technological prowess. Trained with a GAN (Generative Adversarial Network), which has it review hundreds of thousands of images, and with ADA (Adaptive Discriminator Augmentation), which transforms images adaptively and selectively during training, the AI can compensate for the lack of a sufficiently large image library by generating the additional data it then needs to process.
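The adaptive augmentation idea mentioned above (ADA) can be pictured as a feedback loop: the more the discriminator overfits the real images, the more the training images get augmented. The sketch below is a minimal, illustrative toy version; the threshold and step values are assumptions, not the values Nvidia actually uses.

```python
# Toy sketch of adaptive discriminator augmentation (ADA).
# The target value and step size are illustrative assumptions.

def update_augment_p(p, overfit_signal, target=0.6, step=0.01):
    """Raise the augmentation probability when the discriminator
    overfits (signal above target), lower it otherwise; clamp to [0, 1]."""
    p += step if overfit_signal > target else -step
    return min(max(p, 0.0), 1.0)

p = 0.0
# Simulated overfitting measurements over a few training intervals:
for signal in [0.8, 0.9, 0.7, 0.4, 0.5]:
    p = update_augment_p(p, signal)
print(round(p, 2))  # 0.01
```

The net effect is that augmentation is only applied as strongly as the (small) dataset requires, which is what lets a GAN train well on fewer images.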
And the latest demonstration of this AI's capabilities is quite impressive.
Inflating a photo into a 3D model
Nvidia Research has developed a new deep learning engine that makes it possible to create 3D models of objects from simple 2D images. It all runs through a new application called GANverse3D, developed by the Nvidia AI Research Lab in Toronto.
Fueled by thousands of previously ingested images, GANverse3D can "inflate the image" to bring it to life in 3D and even animate it on a computer in a virtual environment.
Nvidia hopes to attract architects, video game creators, designers, and even film storyboarders by offering an easy-to-use tool, "even for those who have never modeled in 3D," to add elements to a rendering, a model, or a draft.
To prove the strength of its engine, Nvidia modeled KITT, the famous car from the series Knight Rider. From a single photo, the black car with the red LED strip was virtualized in 3D on a computer, driving through a virtual scene with its lights on or flashing and lighting effects on the bodywork.
The Nvidia Omniverse Kit and Nvidia PhysX tools then predicted a texture with high-quality materials plausible for KITT, to give it a bit more realism. GANverse3D is thus presented as an extension of Omniverse "to help artists create richer virtual worlds for game development, city planning or designing new machine learning models," adds Jean-François Lafleche, a deep learning specialist at Nvidia.
The American firm then showed how easy it is to change the color, the texture, and any element of the vehicle while moving it through its setting.
It works with objects and horses, but not yet with people
To achieve this, GANverse3D had to be fed more than 55,000 images of vehicles of all kinds, "real pictures," insists Jun Gao, researcher and author of the project. Photos taken from different angles were integrated, then synthesized using Nvidia's GAN technology. The process then meshed the 2D photos to produce a 3D rendering. After that, injecting a single photo of a model is enough for the software to do its job and deliver a rendering quickly.
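The overall flow described above (multi-view synthesis by a GAN, then meshing into a 3D model) can be sketched as a toy pipeline. Everything here is an illustrative assumption: the function names, the stand-in data, and the number of views are invented for the sketch and are not Nvidia's actual API.

```python
# Conceptual sketch of a GANverse3D-style pipeline, as described in the
# article. All names and numbers are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class View:
    azimuth_deg: float   # camera angle around the object
    image: str           # stand-in for pixel data

def synthesize_views(photo: str, n_views: int = 8) -> List[View]:
    """Step 1 (assumed): a GAN trained on ~55,000 real car photos
    hallucinates the same car from several viewpoints."""
    return [View(azimuth_deg=i * 360 / n_views,
                 image=f"{photo}@{i * 360 // n_views}deg")
            for i in range(n_views)]

def fit_mesh(views: List[View]) -> dict:
    """Step 2 (assumed): an inverse-graphics step fits a textured 3D
    mesh consistent with all synthesized viewpoints."""
    return {"vertices": "...", "texture": "...", "n_views_used": len(views)}

# Step 3: from a single input photo to a 3D model.
mesh = fit_mesh(synthesize_views("kitt.jpg"))
print(mesh["n_views_used"])  # 8
```

The key point the sketch captures is that the user supplies only one photo; the intermediate viewpoints are generated, not photographed.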
Backed by a 3D neural rendering engine, developers can then control the customization of the object and change the background. Running on an Nvidia RTX GPU and the recently announced Nvidia Omniverse platform, designed for companies that want to collaborate in 3D in real time, GANverse3D can recreate the 3D model in less than a second. The rendering is not as precise and polished as the original, but that is not the point: the goal is a result that is fast, inexpensive, and easy to use.
The strength of Nvidia's researchers is to have improved their GAN model, fed with thousands of photos, so that the AI can generate the data needed to create a 3D object from a single 2D photo. To speed up the process and generate the data quickly, the technology renders the vehicle from viewpoints at a fixed height and a defined camera distance.
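The fixed-viewpoint setup just described amounts to placing cameras on a ring around the vehicle, at one height and one distance, varying only the angle. A minimal sketch, with illustrative distance and height values (the actual values used by Nvidia are not stated in the article):

```python
# Cameras on a ring around the object: fixed height and distance,
# only the azimuth varies. Distance and height are assumed values.
import math

def camera_positions(n: int, distance: float = 4.0, height: float = 1.5):
    """Return (x, y, z) camera positions evenly spaced around the object."""
    positions = []
    for i in range(n):
        azimuth = 2 * math.pi * i / n
        positions.append((distance * math.cos(azimuth),
                          distance * math.sin(azimuth),
                          height))
    return positions

cams = camera_positions(8)
print(len(cams))  # 8
```

Fixing height and distance removes two degrees of freedom, which is precisely what makes the view synthesis faster.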
At the moment, GANverse3D works perfectly with vehicles, horses, buildings, fixed geometric objects, and even human faces. For the whole body, Nvidia explains that it does not yet have enough data on possible movements to obtain a satisfactory result. But it is only a matter of time.