Nvidia sees itself as the hardware backbone of the "metaverse," and has hinted at how a parallel 3D universe, in which our cartoon selves can work, play, and interact, might operate.
Omniverse, an underlying hardware and software engine that acts as a planetary core fusing virtual communities into an alternate 3D universe, has received new plumbing from the chipmaker. Omniverse is also being used to create avatars for cars, hospitals, and robots to enhance real-world experiences.
During a press conference, Richard Kerris, vice president of the Omniverse platform, stated, "We're not telling people to replace what they do; we're enhancing what they do."
One such announcement is Omniverse Avatar, which can generate interactive, intelligent AI avatars for tasks like helping diners order food or helping a driver self-park or better navigate the roads.
Nvidia showed a conversational avatar that could stand in for servers in restaurants. When a customer orders food, an AI system, represented by an on-screen avatar, could use speech recognition and natural language processing to converse in real time, as well as computer vision to read the person's mood and recommend dishes from its knowledge base.
For that, the avatar will need to run several AI models, for example speech, image recognition, and context, simultaneously, which can be a challenge. The company has created the Unified Compute Framework, which models AI as microservices so applications can run on single or hybrid systems.

Nvidia already has underlying AI systems such as the Megatron-Turing Natural Language Generation model, a monolithic transformer-based language model jointly developed with Microsoft. The system will now be offered on its DGX AI hardware.
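To make the microservices idea concrete, here is a minimal sketch of composing independent AI services behind a uniform interface, in the spirit of what a framework like UCF enables. All function names, the pipeline shape, and the toy logic are invented for illustration; they are not Nvidia APIs.

```python
# Hypothetical sketch: each AI "microservice" is a function that reads and
# updates a shared context dict. A real framework could schedule these
# services across machines or GPUs; here they simply run in sequence.
from typing import Callable, Dict, List

Service = Callable[[Dict], Dict]

def speech_to_text(ctx: Dict) -> Dict:
    # Stand-in for a real speech-recognition model.
    ctx["text"] = ctx.get("audio", "").lower()
    return ctx

def detect_mood(ctx: Dict) -> Dict:
    # Stand-in for a vision/sentiment model estimating the speaker's mood.
    ctx["mood"] = "happy" if "please" in ctx.get("text", "") else "neutral"
    return ctx

def recommend_dish(ctx: Dict) -> Dict:
    # Stand-in for a recommender consulting a knowledge base.
    menu = {"happy": "daily special", "neutral": "house salad"}
    ctx["recommendation"] = menu[ctx["mood"]]
    return ctx

def run_pipeline(services: List[Service], ctx: Dict) -> Dict:
    # Chain the services; swapping one model for another only requires
    # replacing one entry in the list.
    for service in services:
        ctx = service(ctx)
    return ctx

result = run_pipeline([speech_to_text, detect_mood, recommend_dish],
                      {"audio": "One coffee, please"})
print(result["recommendation"])  # daily special
```

The point of the sketch is the decoupling: each model is an interchangeable unit behind a common interface, which is what lets several models run together in one application.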
Deepu Talla, vice president and general manager of Embedded and Edge Computing, said Omniverse Avatar is also the underlying technology in Drive Concierge, an in-car AI assistant that acts as a "personal concierge in the car that will be on call for you."
By observing habits, voice, and interactions, in-car AI systems represented by interactive characters can come to understand a driver and the car's occupants. The AI system can then make phone calls or recommend restaurants in the area.
Using cameras and other sensors, the system can also see if a driver is asleep, or alert a rider if they forget something in the car. The AI system's messages are represented through interactive characters or interfaces on screens.
The concept of a metaverse isn't new; it's been around since Linden Lab's Second Life and games like The Sims. Nvidia hopes to break down proprietary barriers and create a unified metaverse where users can theoretically jump between universes created by various companies.
During the briefing, Nvidia made no mention of helping Facebook realize its vision of a metaverse-centered future, which is at the heart of that company's rebranding to Meta.
But Nvidia is roping other companies into bringing their 3D work to the Omniverse platform through its software connectors. That list includes Esri's ArcGIS CityEngine, which helps create urban environments in 3D, and Replica Studios' AI voice engine, which can simulate realistic voices for animated characters.

"What makes this all possible is the foundation of USD, or Universal Scene Description. USD is the HTML of 3D – an important element because it allows for all these software products to take advantage of the virtual worlds we are talking about," Kerris said. USD was created by Pixar as a way to share 3D assets collaboratively.

Nvidia also announced Omniverse Enterprise, a subscription offering with a software stack to help companies create 3D workflows that can be connected to the Omniverse platform. Priced at $9,000 per year, the offering is targeted at industry verticals like engineering and entertainment, and will be available through resellers that include Dell, Lenovo, PNY and Supermicro.
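To illustrate why USD lends itself to this kind of interchange, here is a minimal human-readable `.usda` file in the spirit of Pixar's public USD tutorials: a transform containing a sphere, which any USD-aware tool can open, reference, and layer.

```usda
#usda 1.0
(
    doc = "A minimal scene: one sphere under a transform."
)

def Xform "hello"
{
    def Sphere "world"
    {
    }
}
```

Because the scene is plain structured text with well-defined composition rules, different applications can contribute layers to the same scene without owning the whole file, which is the property Kerris's "HTML of 3D" comparison points at.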
The Omniverse platform is also being used to generate synthetic data for training "digital twins," or virtual simulations of real-world objects. Isaac Sim, for example, can combine real-world and virtual data into synthetic datasets for training robots. It enables the creation of custom training datasets by allowing the introduction of new objects, camera views, and lighting.
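The idea behind varying objects, camera views, and lighting is often called domain randomization. The following is a toy sketch of it, not Isaac Sim code; the parameter names and ranges are invented for the example, and a real pipeline would hand each sampled configuration to a renderer to produce an image plus ground-truth labels.

```python
# Illustrative domain-randomization sketch: each sample is one randomized
# scene configuration that a simulator could render into a training frame.
import random

def sample_scene(rng: random.Random) -> dict:
    """Draw one randomized scene configuration for a synthetic frame."""
    return {
        "object": rng.choice(["box", "mug", "bottle"]),      # which object appears
        "camera_angle_deg": rng.uniform(0.0, 360.0),          # camera viewpoint
        "light_intensity": rng.uniform(0.2, 1.0),             # lighting variation
    }

def build_dataset(n: int, seed: int = 0) -> list:
    # A fixed seed makes the dataset reproducible, which matters when
    # comparing training runs.
    rng = random.Random(seed)
    return [sample_scene(rng) for _ in range(n)]

dataset = build_dataset(3)
print(len(dataset))  # 3
```

Training on many such randomized variations is what helps a model trained in simulation tolerate the messiness of the real world.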
Drive Sim is the automotive equivalent, using simulated cameras to create realistic scenes for autonomous driving, and it draws on real-world data to train AI models. Real-world phenomena such as motion blur, rolling shutter, and LED flicker are simulated in the camera lens models.
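As a rough illustration of why a sensor effect like rolling shutter matters for simulation: each image row is read out at a slightly different time, so a fast-moving object lands at a different horizontal position in each row and appears skewed. The sketch below is a simplified timing model with invented numbers, not how any Nvidia simulator implements it.

```python
# Simplified rolling-shutter model: rows are captured sequentially over the
# frame's readout time, so per-row timestamps differ.

def row_timestamps(n_rows: int, readout_time_s: float, start_s: float = 0.0):
    """Time at which each row of a rolling-shutter frame is captured."""
    dt = readout_time_s / n_rows
    return [start_s + r * dt for r in range(n_rows)]

def horizontal_skew(n_rows: int, readout_time_s: float, speed_px_per_s: float):
    # Horizontal offset (in pixels) of a moving object in each row,
    # given its constant on-screen speed.
    return [t * speed_px_per_s for t in row_timestamps(n_rows, readout_time_s)]

# An object moving at 1000 px/s during a 20 ms readout drifts across rows:
skew = horizontal_skew(n_rows=4, readout_time_s=0.02, speed_px_per_s=1000.0)
print(skew)  # [0.0, 5.0, 10.0, 15.0]
```

A simulator that ignores this would render every row at the same instant, producing images subtly unlike what the real camera sees, which is exactly the gap effects like these are modeled to close.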
Nvidia works closely with sensor makers to replicate Drive Sim data accurately. The camera, radar, lidar, and ultrasonic sensor models are all path-traced using RTX graphics technology, according to Danny Shapiro, Nvidia's vice president for automotive.

The company wove some hardware announcements into the overall Omniverse narrative.