Curated by THEOUTPOST
On Tue, 30 Jul, 12:07 AM UTC
2 Sources
[1]
NVIDIA Brings Physical AI Through NIM Microservices for Digital Environments
Jensen Huang's vision of "physical AI" is taking shape through NIM microservices and visual AI agents. At SIGGRAPH, NVIDIA introduced new NVIDIA NIM microservices and the NVIDIA Metropolis reference workflow, significant advances in generative physical AI. These include three fVDB NIM microservices, which support NVIDIA's deep learning framework for 3D worlds, and the USD Code, USD Search, and USD Validate microservices for working with Universal Scene Description (OpenUSD). Together, these tools let developers integrate generative AI copilots and agents into USD workflows, expanding what is possible in 3D worlds.

Physical AI, which combines advanced simulation with modern learning methods, is transforming sectors such as manufacturing and healthcare by improving how robots and infrastructure perceive, reason, and navigate. NVIDIA CEO Jensen Huang has described physical AI as the next wave of AI. NVIDIA offers a range of NIM microservices tailored to specific models and industries, supporting speech and translation, vision and intelligence, and realistic animation.

Visual AI agents, powered by vision language models (VLMs), are increasingly deployed in hospitals, factories, and cities. In Palermo, Italy, traffic managers have deployed visual AI agents built on NVIDIA NIM to uncover physical insights that help them manage roadways more effectively. Companies such as Foxconn and Pegatron use the same tools to design and operate virtual factories, improving safety and efficiency.

NVIDIA's physical AI software, including VLM NIM microservices, supports a "simulation-first" approach that is crucial for industrial automation projects. These tools enable the creation of digital twins that simulate real-world conditions for AI model training. Synthetic data from these simulations can replace costly and hard-to-obtain real-world datasets, improving model accuracy and performance.
NVIDIA's NIM microservices and Omniverse Replicator are key in building these synthetic data pipelines.
[2]
AI Gets Physical: New NVIDIA NIM Microservices Bring Generative AI to Digital Environments
Millions of people already use generative AI to assist in writing and learning. Now, the technology can also help them more effectively navigate the physical world.

NVIDIA announced at SIGGRAPH generative physical AI advancements, including the NVIDIA Metropolis reference workflow for building interactive visual AI agents and new NVIDIA NIM microservices that will help developers train physical machines and improve how they handle complex tasks. These include three fVDB NIM microservices that support NVIDIA's new deep learning framework for 3D worlds, as well as the USD Code, USD Search and USD Validate NIM microservices for working with Universal Scene Description (aka OpenUSD). The NVIDIA OpenUSD NIM microservices work together with the world's first generative AI models for OpenUSD development, also developed by NVIDIA, to enable developers to incorporate generative AI copilots and agents into USD workflows and broaden the possibilities of 3D worlds.

Physical AI uses advanced simulations and learning methods to help robots and other industrial automation more effectively perceive, reason and navigate their surroundings. The technology is transforming industries like manufacturing and healthcare, and advancing smart spaces with robots, factory and warehouse technologies, surgical AI agents and cars that can operate more autonomously and precisely.

NVIDIA offers a broad range of NIM microservices customized for specific models and industry domains. Its suite of NIM microservices tailored for physical AI supports capabilities for speech and translation, vision and intelligence, and realistic animation and behavior. Visual AI agents use computer vision capabilities to perceive and interact with the physical world and perform reasoning tasks.
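As a rough illustration of how such a copilot might be wired into a USD workflow, the sketch below builds an OpenAI-style chat request for a USD Code service. The endpoint URL and model name are placeholders, not confirmed identifiers from NVIDIA's API catalog; the payload is only constructed here, not sent.

```python
import json

# Placeholder endpoint and model name -- NOT confirmed identifiers from
# NVIDIA's API catalog. NIM services generally expose an OpenAI-compatible
# chat-completions interface, which is the format assumed below.
NIM_ENDPOINT = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL_NAME = "usd-code"  # hypothetical model id

def build_usd_code_request(task: str) -> dict:
    """Build an OpenAI-style chat payload asking a USD copilot for help."""
    return {
        "model": MODEL_NAME,
        "messages": [
            {"role": "system",
             "content": "You are an OpenUSD assistant. Reply with USD Python snippets."},
            {"role": "user", "content": task},
        ],
        "temperature": 0.2,  # low temperature: keep code suggestions consistent
    }

payload = build_usd_code_request(
    "Create a stage with a cube prim at /World/Cube scaled by 2."
)
print(json.dumps(payload, indent=2))
```

A real client would POST this payload to the service with an API key and parse the returned completion into its USD authoring tool.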
Highly perceptive and interactive visual AI agents are powered by a new class of generative AI models called vision language models (VLMs), which bridge digital perception and real-world interaction in physical AI workloads to enable enhanced decision-making, accuracy, interactivity and performance. With VLMs, developers can build vision AI agents that more effectively handle challenging tasks, even in complex environments.

Generative AI-powered visual AI agents are rapidly being deployed across hospitals, factories, warehouses, retail stores, airports, traffic intersections and more. To help physical AI developers more easily build high-performing, custom visual AI agents, NVIDIA offers NIM microservices and reference workflows for physical AI. The NVIDIA Metropolis reference workflow provides a simple, structured approach for customizing, building and deploying visual AI agents.

City traffic managers in Palermo, Italy, deployed visual AI agents using NVIDIA NIM to uncover physical insights that help them better manage roadways. K2K, an NVIDIA Metropolis partner, is leading the effort, integrating NVIDIA NIM microservices and VLMs into AI agents that analyze the city's live traffic cameras in real time. City officials can ask the agents questions in natural language and receive fast, accurate insights on street activity, along with suggestions for improving the city's operations, such as adjusting traffic light timing.

Leading electronics manufacturers Foxconn and Pegatron have adopted physical AI, NIM microservices and Metropolis reference workflows to more efficiently design and run their massive manufacturing operations. The companies are building virtual factories in simulation to save significant time and costs.
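To make the Palermo-style interaction concrete, here is a minimal sketch of the kind of multimodal message a VLM-backed agent consumes: a camera frame paired with a natural-language question. The message layout follows the widely used OpenAI-style chat format; it is an assumption, not a confirmed detail, that any given VLM NIM accepts exactly this shape.

```python
import base64

def build_vlm_query(question, jpeg_bytes):
    """Pair a camera frame (JPEG bytes) with a natural-language question
    in an OpenAI-style multimodal chat message."""
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": "data:image/jpeg;base64," + b64}},
        ],
    }]

# Stand-in bytes; a real agent would grab a frame from the live camera feed.
messages = build_vlm_query(
    "How congested is this intersection, and should the light timing change?",
    b"\xff\xd8\xff\xe0 fake jpeg",
)
print(messages[0]["content"][0]["text"])
```

The agent layer then forwards this message to the VLM and relays the model's answer back to the operator in natural language.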
They're also running more thorough tests and refinements for their physical AI, including AI multi-camera and visual AI agents, in digital twins before real-world deployment, improving worker safety and operational efficiency.

Many AI-driven businesses are now adopting a "simulation-first" approach for generative physical AI projects involving real-world industrial automation. Manufacturing, factory logistics and robotics companies need to manage intricate human-worker interactions, advanced facilities and expensive equipment. NVIDIA physical AI software, tools and platforms, including physical AI and VLM NIM microservices, reference workflows and fVDB, can help them streamline the highly complex engineering required to create digital representations or virtual environments that accurately mimic real-world conditions.

VLMs are seeing widespread adoption across industries because of their ability to interpret complex real-world imagery. However, these models can be challenging to train because of the immense volume of data required to build an accurate physical AI model. Synthetic data generated from digital twins using computer simulations offers a powerful alternative to real-world datasets, which can be expensive, and sometimes impossible, to acquire for model training, depending on the use case.

Tools like NVIDIA NIM microservices and Omniverse Replicator let developers build generative AI-enabled synthetic data pipelines to accelerate the creation of robust, diverse datasets for training physical AI. This enhances the adaptability and performance of models such as VLMs, enabling them to generalize more effectively across industries and use cases.

Developers can access state-of-the-art, open and NVIDIA-built foundation AI models and NIM microservices at ai.nvidia.com. The Metropolis NIM reference workflow is available in the GitHub repository, and Metropolis VIA microservices are available for download in developer preview.
OpenUSD NIM microservices are available in preview through the NVIDIA API catalog. Watch how accelerated computing and generative AI are transforming industries and creating new opportunities for innovation and growth in NVIDIA founder and CEO Jensen Huang's fireside chats at SIGGRAPH.
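The synthetic-data pipelines described above rest on domain randomization: sample many scene variations, render each one, and harvest automatically labeled images. The sketch below shows only the sampling step in plain Python; the parameter names and ranges are illustrative and are not Omniverse Replicator APIs.

```python
import random

def randomize_scene(seed=None):
    """Sample one randomized scene configuration -- the sampling step of a
    synthetic-data pipeline. Names and ranges are illustrative only."""
    rng = random.Random(seed)
    return {
        "light_intensity": rng.uniform(200.0, 1500.0),  # arbitrary lux-like range
        "camera_height_m": rng.uniform(2.0, 6.0),
        "object_yaw_deg": rng.uniform(0.0, 360.0),
        "floor_texture": rng.choice(["concrete", "steel", "wood"]),
        "num_forklifts": rng.randint(0, 4),
    }

# Each config would drive one render plus auto-generated labels
# (bounding boxes, segmentation masks) in a real pipeline.
dataset = [randomize_scene(seed=i) for i in range(100)]
print(len(dataset))
```

Seeding each sample makes the dataset reproducible, which matters when a model regression needs to be traced back to a particular batch of synthetic scenes.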
NVIDIA unveils NIM microservices, a groundbreaking technology that integrates generative AI into digital environments, enabling more realistic and interactive virtual experiences across various industries.
NVIDIA, a leader in AI and computing, has announced a significant advancement in the field of artificial intelligence with the introduction of NVIDIA NIM (NVIDIA Inference Microservices) [1]. This technology aims to bring "physical AI" to digital environments, marking a new era in the integration of AI with virtual worlds.
NIM microservices are designed to infuse generative AI capabilities into digital environments, allowing for more realistic and interactive experiences [2]. These microservices enable the creation of intelligent virtual objects and characters that can understand and respond to user inputs in natural language, significantly enhancing the immersion and functionality of digital spaces.
The potential applications of NIM microservices span various sectors, including manufacturing, healthcare, robotics, warehousing, retail, and smart-city infrastructure such as traffic management.
NIM microservices integrate with NVIDIA's Omniverse platform and utilize large language models (LLMs) to process and generate responses [1]. They can be deployed on NVIDIA GPUs in the cloud or on premises, offering flexibility in implementation. The microservices are designed to be modular, allowing developers to integrate specific AI capabilities into their applications as needed.
NVIDIA has announced collaborations with several companies to implement NIM microservices, including Metropolis partner K2K, which builds traffic-analysis agents for the city of Palermo, and electronics manufacturers Foxconn and Pegatron, which use them to design and operate virtual factories.
These partnerships demonstrate the wide-ranging potential of NIM microservices across different sectors [2].
The introduction of NIM microservices represents a significant step towards more immersive and interactive digital experiences. As the technology evolves, it could lead to more lifelike virtual worlds, safer and more efficient industrial automation, and faster development of physical AI models through simulation-first workflows.
As NVIDIA continues to develop and refine this technology, it is likely to play a crucial role in shaping the future of AI-driven digital environments.
References
[1] Analytics India Magazine | NVIDIA Brings Physical AI Through NIM Microservices for Digital Environments
[2] AI Gets Physical: New NVIDIA NIM Microservices Bring Generative AI to Digital Environments
© 2025 TheOutpost.AI All rights reserved