The DeanBeat: Nvidia CEO Jensen Huang says AI will auto-populate 3D imagery of the metaverse

AI is necessary to create a virtual world. In a Q&A at the online GTC22 event this week, Nvidia CEO Jensen Huang said that AI will automatically populate the 3D imagery of the metaverse.

He believes that AI will make the first pass at creating the 3D objects that populate the vast virtual worlds of the metaverse, and that human creators will then take over and refine them. And while that is a big claim about how smart AI will become, Nvidia has research to back it up.

Nvidia Research announced this morning a new AI model that can help build the massive virtual worlds being created by a growing number of companies and developers, making it easier to populate them with a diverse array of 3D buildings, vehicles, characters and more.

This kind of mundane imagery represents an enormous amount of tedious work. The real world is full of variety, Nvidia noted: streets are lined with unique buildings, different vehicles whizz by, and diverse crowds pass through. Manually modeling a 3D virtual world that reflects all of this is incredibly time-consuming, which makes it hard to fill out a detailed digital environment.

Nvidia wants to simplify such tasks with its Omniverse tools and cloud service, hoping to make developers’ lives easier when they create metaverse applications. And auto-generating imagery, as we have seen happening with DALL-E and other AI models this year, is one way to lighten that load.

Nvidia CEO Jensen Huang spoke at a keynote at GTC22.

I asked Huang in a press Q&A earlier this week what could make the metaverse come faster. He alluded to the Nvidia Research work, but the company didn’t spill the beans until today.

First of all, as you know, the metaverse is created by users, and it’s either crafted by hand or created with the help of AI, Huang said. And in the future it’s very likely that we’ll describe some characteristics of a house or a city or something like that, and it’s like this city, or it’s like Toronto or New York City, and it creates a new city for us. Maybe we don’t like it; we can give it additional prompts, or we can just keep hitting Enter until it automatically generates one we’d like to start from. And then, from that world, we will modify it. And so I think the AI for creating virtual worlds is being realized as we speak.

GET3D details

Trained using only 2D images, Nvidia GET3D generates 3D shapes with high-fidelity textures and intricate geometric details. These 3D objects are created in the same format used by popular graphics software applications, allowing users to immediately import their shapes into 3D renderers and game engines for further editing.

The generated objects could be used in 3D representations of buildings, outdoor spaces or entire cities, and are designed for applications in gaming, robotics, architecture and social media.

GET3D can generate a virtually unlimited number of 3D shapes based on the data it was trained on. Like an artist who turns a lump of clay into a detailed sculpture, the model transforms numbers into complex 3D shapes.

What’s even more important is the technology I talked about a second ago, called large language models, he said. To be able to learn from all of humanity’s creations, and to be able to imagine a 3D world. And so one day, from words, through a large language model, will come out triangles, geometry, textures and materials. And from that, we could modify it. And because none of it is pre-baked and none of it is pre-rendered, all of the physics simulation and all of the light simulation has to be done in real time. That’s why the latest technologies we’re creating for RTX neural rendering are so important. Because we can’t do it by brute force. We need the help of artificial intelligence to do that.

For example, when trained on 2D car images, it creates a collection of sedans, trucks, race cars and vans. When trained on animal images, it comes up with creatures such as foxes, rhinos, horses and bears. Given chairs, the model generates assorted swivel chairs, dining chairs and cozy recliners.

GET3D brings us a step closer to democratizing AI-powered 3D content creation, said Sanja Fidler, vice president of AI research at Nvidia and head of the Toronto-based AI lab that developed the tool. Its ability to instantly generate textured 3D shapes could be a game-changer for developers, helping them rapidly populate virtual worlds with varied and interesting objects.

GET3D is one of more than 20 Nvidia-authored papers and workshops accepted to the NeurIPS AI conference, which takes place in New Orleans and virtually later this year.

Nvidia said that previous 3D generative AI models, while quicker than manual methods, were limited in the level of detail they could produce. Even recent inverse rendering methods can only generate 3D objects based on 2D images taken from various angles, and they require developers to build one 3D shape at a time.

GET3D can instead churn out some 20 shapes a second when running inference on a single Nvidia GPU, working like a generative adversarial network for 2D images while generating 3D objects. The larger and more diverse the training dataset it learns from, the more varied and detailed the output.
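To make that throughput figure concrete, here is a minimal, hypothetical sketch of what batched latent-to-shape generation looks like in code. The Get3DGenerator class and its latent sizes are placeholders, not Nvidia's published API; only the pattern of sampling latent codes and timing a single batched forward pass is the point.

```python
# Hypothetical sketch: measuring shape-generation throughput for a
# GET3D-style generator. "Get3DGenerator" is a placeholder class, not
# Nvidia's actual API; the point is the pattern of sampling latent codes
# and timing one batched forward pass.
import time
import torch
import torch.nn as nn

class Get3DGenerator(nn.Module):
    """Stand-in generator: maps geometry + texture latents to a dummy mesh tensor."""
    def __init__(self, z_dim: int = 512, out_dim: int = 4096):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 1024), nn.ReLU(),
            nn.Linear(1024, out_dim),
        )

    def forward(self, z_geo: torch.Tensor, z_tex: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z_geo, z_tex], dim=-1))

device = "cuda" if torch.cuda.is_available() else "cpu"
gen = Get3DGenerator().to(device).eval()

batch = 20  # roughly the "20 shapes a second" figure Nvidia quotes
z_geo = torch.randn(batch, 512, device=device)  # geometry latent codes
z_tex = torch.randn(batch, 512, device=device)  # texture latent codes

with torch.no_grad():
    start = time.time()
    shapes = gen(z_geo, z_tex)
    if device == "cuda":
        torch.cuda.synchronize()
    print(f"Generated {batch} placeholder shapes in {time.time() - start:.3f}s")
```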

Nvidia researchers trained GET3D on synthetic data consisting of 2D images of 3D shapes captured from different camera angles. It took the team just two days to train the model on around one million images using Nvidia GPUs.
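That synthetic pipeline boils down to rendering each 3D asset from many randomly sampled camera viewpoints. Below is a minimal sketch of that pose-sampling step; the angle ranges and camera radius are illustrative assumptions, not the settings Nvidia used.

```python
# Illustrative only: sample a random camera pose on a sphere around an
# object, the kind of step a synthetic multi-view rendering pipeline
# performs before rasterizing each 2D training image.
import numpy as np

def look_at(eye: np.ndarray, target: np.ndarray, up=np.array([0.0, 1.0, 0.0])) -> np.ndarray:
    """Build a 4x4 world-to-camera matrix that looks from `eye` toward `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def sample_camera(radius: float = 2.5) -> np.ndarray:
    """Pick a random azimuth/elevation and return the corresponding view matrix."""
    azimuth = np.random.uniform(0.0, 2.0 * np.pi)
    elevation = np.random.uniform(np.radians(10), np.radians(40))
    eye = radius * np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.sin(elevation),
        np.cos(elevation) * np.sin(azimuth),
    ])
    return look_at(eye, target=np.zeros(3))

print(sample_camera())  # one random view matrix per rendered training image
```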

The shapes GET3D creates take the form of a triangle mesh, like a papier-mâché model, covered with a textured material. This lets users easily import the objects into game engines, 3D modeling software and film renderers, and edit them there.
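Because the output is an ordinary textured triangle mesh, getting a generated asset into an engine is a standard import/export step. Here is a small sketch using the open-source trimesh library; the file names are placeholders, and GET3D's exact export format may differ.

```python
# Sketch of the "import and edit in standard tools" step: load a textured
# triangle mesh (e.g. an OBJ exported from a generator) and re-export it
# as GLB for a game engine. The file paths are placeholders.
import trimesh

mesh = trimesh.load("generated_chair.obj", force="mesh")  # triangle mesh + material
print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} triangles")

# Simple edit before handing off to an engine: recenter and scale to unit size.
mesh.apply_translation(-mesh.centroid)
mesh.apply_scale(1.0 / mesh.extents.max())

mesh.export("generated_chair.glb")  # glTF/GLB imports cleanly into common engines
```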

Once creators export shapes generated with GET3D to a graphics application, they can apply realistic lighting effects as the object moves or rotates in a scene. By incorporating another AI tool, StyleGAN-NADA, developers can use text prompts to add a specific style to an image, for example modifying a rendered car to become a burned car or a taxi, or turning a regular house into a haunted one.
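For context, StyleGAN-NADA's core idea is a directional CLIP loss: the change from a source prompt ("a car") to a target prompt ("a burned car") in CLIP's text space should match the change from the original render to the edited render in CLIP's image space. The sketch below shows just that loss term, using OpenAI's clip package; the generator fine-tuning loop and image preprocessing are omitted, and the image tensors are assumed to be already CLIP-preprocessed.

```python
# Sketch of the directional CLIP loss behind StyleGAN-NADA-style
# text-driven editing: align the image-space change with the
# text-space change ("a car" -> "a burned car").
import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def directional_clip_loss(src_img, edited_img,
                          src_text="a car", tgt_text="a burned car"):
    # Text direction is fixed, so no gradients are needed for it.
    tokens = clip.tokenize([src_text, tgt_text]).to(device)
    with torch.no_grad():
        text_emb = model.encode_text(tokens).float()
    text_dir = F.normalize(text_emb[1] - text_emb[0], dim=-1)

    # Image direction: original render vs. edited render (gradients flow here).
    img_dir = F.normalize(
        model.encode_image(edited_img).float() - model.encode_image(src_img).float(),
        dim=-1,
    )

    # 1 - cosine similarity between the image-space and text-space directions.
    return (1.0 - (img_dir * text_dir).sum(dim=-1)).mean()
```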

The researchers noted that a future version of GET3D could use camera pose estimation techniques to let developers train the model on real-world data rather than synthetic datasets. It could also be improved to support universal generation, meaning developers could train GET3D on all kinds of 3D shapes at once instead of one object category at a time.

Prologue is Brendan Greene’s next project.

So AI will populate worlds, Huang said. Those worlds will be simulations, not just animations. And to run all of this, Huang reckons a new breed of data center is needed around the world. It’s called a GDN, not a CDN: a graphics delivery network, battle-tested through Nvidia’s GeForce Now cloud gaming service. Nvidia has taken that service and used it to create Omniverse Cloud, a suite of tools for building Omniverse applications any time and anywhere. The GDN will host cloud games as well as the metaverse tools of Omniverse Cloud.

This type of network could provide the real-time computing required for the metaverse.

It’s the interactivity that’s essentially instant, Huang said.

Are any game developers asking for this? Well, I happen to know one. Last year, Brendan Greene, creator of the battle royale game PUBG, announced Prologue and then moved on to Project Artemis, an attempt to build a virtual world the size of the Earth. He said it could only be built with a combination of game design, user-generated content and AI.

Ha! Holy shit.
