Curated by THEOUTPOST
On Tue, 30 Jul, 12:07 AM UTC
4 Sources
[1]
NVIDIA : 'Everybody Will Have An AI Assistant,' NVIDIA CEO Tells SIGGRAPH Audience
The generative AI revolution - with deep roots in visual computing - is amplifying human creativity even as accelerated computing promises significant gains in energy efficiency, NVIDIA founder and CEO Jensen Huang said Monday. That makes this week's SIGGRAPH professional graphics conference, in Denver, the logical venue to discuss what's next. "Everybody will have an AI assistant," Huang said. "Every single company, every single job within the company, will have AI assistance." Even as generative AI amplifies human productivity, Huang said, the accelerated computing technology that underpins it promises to make computing more energy efficient. "Accelerated computing helps you save so much energy, 20 times, 50 times, and doing the same processing," Huang said. "The first thing we have to do, as a society, is accelerate every application we can: this reduces the amount of energy being used all over the world."

The conversation follows a spate of announcements from NVIDIA today. NVIDIA introduced a new suite of NIM microservices tailored for diverse workflows, including OpenUSD, 3D modeling, physics, materials, robotics, industrial digital twins and physical AI. These advancements aim to enhance developer capabilities, particularly with the integration of Hugging Face Inference-as-a-Service on DGX Cloud. In addition, Shutterstock has launched a Generative 3D Service, while Getty Images has upgraded its offerings using NVIDIA Edify technology.

In the realm of AI and graphics, NVIDIA has revealed new OpenUSD NIM microservices and reference workflows designed for generative physical AI applications. This includes a program for accelerating humanoid robotics development through new NIM microservices for robotics simulation and more. Finally, WPP, the world's largest advertising agency, is using Omniverse-driven generative AI for The Coca-Cola Company, helping drive brand authenticity and showcasing the practical applications of NVIDIA's advancements in AI technology across industries.

Huang and WIRED senior reporter Lauren Goode started their conversation by exploring how visual computing gave rise to everything from computer games to digital animation to GPU-accelerated computing and, most recently, generative AI powered by industrial-scale AI factories. All these advancements build on one another. Robotics, for example, requires advanced AI and photorealistic virtual worlds where AI can be trained before being deployed into next-generation humanoid robots. Huang explained that robotics requires three computers: one to train the AI, one to test the AI in a physically accurate simulation, and one within the robot itself.

"Just about every industry is going to be affected by this, whether it's scientific computing trying to do a better job predicting the weather with a lot less energy, to augmenting and collaborating with creators to generate images, or generating virtual scenes for industrial visualization," Huang said. "Robotics, self-driving cars are all going to be transformed by generative AI." Likewise, NVIDIA Omniverse systems - built around the OpenUSD standard - will also be key to harnessing generative AI to create assets that the world's largest brands can use. By pulling from brand assets that live in Omniverse, these systems can capture and replicate carefully curated brand magic.
Finally, all these systems - visual computing, simulation and large-language models - will come together to create digital humans who can help people interact with digital systems of all kinds. "One of the things that we're announcing here this week is the concept of digital agents, digital AIs that will augment every single job in the company," Huang said. "And so one of the most important use cases that people are discovering is customer service," Huang said. "In the future, my guess is that it's going to be human still, but AI in the loop." All of this, like any new tool, promises to amplify human productivity and creativity. "Imagine the stories that you're going to be able to tell with these tools," Huang said.
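The NIM microservices and Hugging Face Inference-as-a-Service integration mentioned above are consumed programmatically. As a rough illustration only, the sketch below calls a hosted model through an OpenAI-compatible chat endpoint, which is how many NIM deployments are exposed; the base URL, model name and key handling here are assumptions for the example, not a specific NVIDIA API.

```python
# Minimal sketch: calling a hosted NIM-style microservice through an
# OpenAI-compatible chat endpoint. The base URL and model name below are
# assumptions for illustration; substitute the values for the specific
# microservice you deploy or access.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM-style endpoint
    api_key="YOUR_NVIDIA_API_KEY",                   # placeholder credential
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # placeholder model name
    messages=[
        {"role": "user", "content": "Summarize what OpenUSD is in two sentences."}
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```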
[2]
NVIDIA : New NVIDIA Digital Human Technologies Enhance Customer Interactions Across Industries
Generative AI is unlocking new ways for enterprises to engage customers through digital human avatars. At SIGGRAPH, NVIDIA previewed James, an interactive digital human that can connect with people using emotions, humor and more. James is based on a customer-service workflow using NVIDIA ACE, a reference design for creating custom, hyperrealistic, interactive avatars. Users will soon be able to talk with James in real time at ai.nvidia.com.

At the computer graphics conference, NVIDIA also showcased the latest advancements to the NVIDIA Maxine AI platform, including Maxine 3D and Audio2Face-2D for an immersive telepresence experience. Developers can use Maxine and NVIDIA ACE digital human technologies to make customer interactions with digital interfaces more engaging and natural. ACE technologies enable digital human development with AI models for speech and translation, vision, intelligence, lifelike animation and behavior, and realistic appearance. Companies across industries are using Maxine and ACE to deliver immersive virtual customer experiences.

Built on top of NVIDIA NIM microservices, James is a virtual assistant that can provide contextually accurate responses. Using retrieval-augmented generation (RAG), James can accurately tell users about the latest NVIDIA technologies. ACE allows developers to use their own data to create domain-specific avatars that can communicate relevant information to customers. James is powered by the latest NVIDIA RTX rendering technologies for advanced, lifelike animations, and his natural-sounding voice is powered by ElevenLabs. NVIDIA ACE lets developers customize animation, voice and language when building avatars tailored for different use cases.

Maxine, a platform for deploying cutting-edge AI features that enhance the audio and video quality of digital humans, enables the use of real-time, photorealistic 2D and 3D avatars with video-conferencing devices. Maxine 3D converts 2D video portrait inputs into 3D avatars, allowing the integration of highly realistic digital humans in video conferencing and other two-way communication applications. The technology will soon be available in early access. Audio2Face-2D, currently in early access, animates static portraits based on audio input, creating dynamic, speaking digital humans from a single image. Try the technology at ai.nvidia.com.

HTC, Looking Glass, Reply and UneeQ are among the latest companies using NVIDIA ACE and Maxine across a broad range of use cases, including customer service agents and telepresence experiences in entertainment, retail and hospitality.

At SIGGRAPH, digital human technology developer UneeQ is showcasing two new demos. The first spotlights cloud-rendered digital humans powered by NVIDIA GPUs with local, in-browser computer vision for enhanced scalability and privacy, animated using the Audio2Face-3D NVIDIA NIM microservice. UneeQ's Synapse technology processes anonymized user data and feeds it to a large language model (LLM) for more accurate, responsive interactions. The second demo runs on a single NVIDIA RTX GPU-powered laptop, featuring an advanced digital human powered by the Gemma 7B LLM, RAG and the NVIDIA Audio2Face-3D NIM microservice. Both demos showcase UneeQ's NVIDIA-powered efforts to develop digital humans that can react to users' facial expressions and actions, pushing the boundaries of realism in virtual customer service experiences.
HTC Viverse has integrated the Audio2Face-3D NVIDIA NIM microservice into its VIVERSE AI agent for dynamic facial animation and lip sync, allowing for more natural and immersive user interactions. Hologram technology company Looking Glass' Magic Mirror demo at SIGGRAPH uses a simple camera setup and Maxine's advanced 3D AI capabilities to generate a real-time holographic feed of users' faces on its newly launched, group-viewable Looking Glass 16-inch and 32-inch Spatial Displays.

Reply is unveiling an enhanced version of Futura, its cutting-edge digital human developed for Costa Crociere's Costa Smeralda cruise ship. Powered by the Audio2Face-3D NVIDIA NIM and Riva ASR NIM microservices, Futura's speech-synthesis capabilities tap advanced technologies including GPT-4o, LlamaIndex for RAG and Microsoft Azure text-to-speech services. Futura also incorporates Reply's proprietary affective computing technology, alongside Hume AI and MorphCast, for comprehensive emotion recognition.

Built using Unreal Engine 5.4.3 and MetaHuman Creator with NVIDIA ACE-powered facial animation, Futura supports six languages. The intelligent assistant can help plan personalized port visits, suggest tailored itineraries and facilitate tour bookings. In addition, Futura refines recommendations based on guest feedback and uses a specially created knowledge base to provide informative city presentations, enhancing tourist itineraries. Futura aims to enhance customer service and offer immersive interactions in real-world scenarios, leading to streamlined operations and driving business growth.

Learn more about NVIDIA ACE and NVIDIA Maxine.
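Several of the avatars described here (James, UneeQ's laptop demo, Futura) lean on retrieval-augmented generation to keep answers grounded in curated content. The snippet below is a minimal, self-contained sketch of that retrieval-and-prompting step using TF-IDF similarity; the document set and prompt format are hypothetical, and a production avatar would typically use an embedding-based retriever plus dedicated microservices for generation, speech and facial animation.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) step behind a
# customer-service avatar: retrieve the most relevant documents, then build a
# grounded prompt for whatever LLM backend is in use. The document set below
# is a hypothetical placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "NVIDIA ACE is a reference design for building interactive digital human avatars.",
    "Maxine 3D converts 2D video portraits into 3D avatars for video conferencing.",
    "Audio2Face-2D animates a static portrait from an audio track.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (TF-IDF + cosine)."""
    vectorizer = TfidfVectorizer().fit(docs + [query])
    doc_vecs = vectorizer.transform(docs)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

def build_prompt(query: str) -> str:
    """Assemble a prompt that constrains the LLM to the retrieved context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What does Audio2Face-2D do?"))
# The resulting prompt would be sent to an LLM, and the reply passed to a
# speech/animation service (e.g., an Audio2Face-style microservice) to drive
# the avatar's voice and facial motion.
```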
[3]
Nvidia begins ushering in next wave of AI - physical AI: SIGGRAPH 2024
Nvidia (NASDAQ:NVDA) CEO Jensen Huang previewed what he views as the next wave of generative artificial intelligence, physical AI, at the SIGGRAPH conference in Denver on Monday. "The first wave was accelerated computing, which lowers the amount of energy used," Huang said. "The next wave of AI was enterprise - customer service. We want to give every organization the chance to create their own AIs." "The next wave is physical AI," he added. "It requires three computers. One computer to create AI, another to send commands to a robot, and a third computer to handle it all." "We are entering the age of the AI-powered humanoid robot," said a voice during a video presentation on physical AI.

Nvidia revealed in detail how it is accelerating humanoid robotics development at the conference. New offerings include Nvidia NIM and OSMO NIM microservices and frameworks for running multi-stage robotics workloads. Also, the MimicGen NIM microservice "generates synthetic motion data based on recorded teleoperated data from spatial computing devices like Apple (AAPL) Vision Pro," according to Nvidia. "The Robocasa NIM microservice generates robot tasks and simulation-ready environments in OpenUSD, a universal framework for developing and collaborating within 3D worlds." Several microservices are now available in preview, with more coming soon, Nvidia said.

Earlier during the discussion, Huang revealed an AI-powered customer service agent. It contains knowledge of human customer service agents, and interacts visually with customers, even making facial changes appropriate to the tone of the conversation. WIRED senior reporter Lauren Goode questioned the ethics of using an AI customer service agent that genuinely appears human. "It's still pretty robotic, and that's not a terrible thing," Huang responded. "We've made this digital human technology realistic, but we still know it's a robot ... This could help someone who is lonely, elderly. Having someone much more human than a text box is a good thing."

Huang painted the picture of a world where people in all types of industries have their own personal AIs that have learned how to act and respond in a way similar to their human tutor. Huang said he envisions creating an AI version of himself built on a collection of everything he has said or written that could generate responses nearly identical to his own. He wants every employee in his company to have their own AI assistant.

"Nvidia's latest designs wouldn't be possible without AI," Huang said. "Without AI there would not be a Hopper or the Blackwell. The concept is for digital AIs that will be able to augment every single job in the company." "We invent tools here," he added. "We create tools to do bigger and better work - to do things we couldn't do before."
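The MimicGen workflow described above turns a handful of recorded teleoperation demonstrations into a much larger synthetic training set. The toy sketch below illustrates that idea only; it is not the MimicGen NIM API. It perturbs a recorded end-effector trajectory toward randomly offset goals to produce plausible variants.

```python
# Conceptual sketch of synthetic-motion generation from teleoperated demos,
# in the spirit of the MimicGen workflow described above. This is NOT the
# MimicGen NIM API; it only shows the idea of turning one recorded
# trajectory into many perturbed variants for training data.
import numpy as np

rng = np.random.default_rng(0)

# A "recorded" demo: 50 timesteps of 3D end-effector positions (toy data).
demo = np.cumsum(rng.normal(scale=0.01, size=(50, 3)), axis=0)

def synthesize_variants(trajectory: np.ndarray, n: int = 10,
                        noise: float = 0.005) -> list[np.ndarray]:
    """Generate n perturbed copies of a demo trajectory.

    Each variant keeps the original starting pose, ramps toward a randomly
    offset goal over the course of the motion, and adds small per-step noise.
    """
    variants = []
    for _ in range(n):
        goal_shift = rng.normal(scale=0.02, size=3)
        # Blend weight ramps 0 -> 1 so only the later part of the motion
        # is shifted toward the new goal.
        w = np.linspace(0.0, 1.0, len(trajectory))[:, None]
        jitter = rng.normal(scale=noise, size=trajectory.shape)
        variants.append(trajectory + w * goal_shift + jitter)
    return variants

synthetic_dataset = synthesize_variants(demo, n=100)
print(len(synthetic_dataset), synthetic_dataset[0].shape)  # 100 (50, 3)
```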
[4]
NVIDIA : Supercharges Digital Marketing With Greater Control Over Generative AI
The world's brands and agencies are using generative AI to create advertising and marketing content, but it doesn't always provide the desired outputs. NVIDIA offers a comprehensive set of technologies - bringing together generative AI, NVIDIA NIM microservices, NVIDIA Omniverse and Universal Scene Description (OpenUSD) - to allow developers to build applications and workflows that enable brand-accurate, targeted and efficient advertising at scale.

Developers can use the USD Search NIM microservice to give artists access to a vast archive of OpenUSD-based, brand-approved assets - such as products, props and environments - and, when integrated with the USD Code NIM microservice, assembly of these scenes can be accelerated. Teams can also use the NVIDIA Edify-powered Shutterstock Generative 3D service to rapidly generate new 3D assets using AI. The scenes, once constructed, can be rendered to a 2D image and used as input to direct an AI-powered image generator to create precise, brand-accurate visuals.

Global agencies, developers and production studios are tapping these technologies to revolutionize every aspect of the advertising process, from creative production and content supply chain to dynamic creative optimization. WPP announced at SIGGRAPH its adoption of the technologies, naming The Coca-Cola Company the first brand to embrace generative AI with Omniverse and NVIDIA NIM microservices.

The NVIDIA Omniverse development platform has seen widespread adoption for its ability to build accurate digital twins of products. These virtual replicas allow brands and agencies to create ultra-photorealistic and physically accurate 3D product configurators, helping to increase personalization, customer engagement and loyalty, and average selling prices, while reducing return rates. Digital twins can also serve many purposes and be updated to meet shifting consumer preferences with minimal time, cost and effort, helping flexibly scale content production.

Image courtesy of Monks, Hatch.

Global marketing and technology services company Monks developed Monks.Flow, an AI-centric professional managed service that uses the Omniverse platform to help brands virtually explore different customizable product designs and unlock scale and hyper-personalization across any customer journey. "NVIDIA Omniverse and OpenUSD's interoperability accelerates connectivity between marketing, technology and product development," said Lewis Smithingham, executive vice president of strategic industries at Monks. "Combining Omniverse with Monks' streamlined marketing and technology services, we infuse AI throughout the product development pipeline and help accelerate technological and creative possibilities for clients."

Collective World, a creative and technology company, is an early adopter of real-time 3D, OpenUSD and NVIDIA Omniverse, using them to create high-quality digital campaigns for customers like Unilever and EE.
The technologies allow Collective to develop digital twins, delivering consistent, high-quality product content at scale to streamline advertising and marketing campaigns. Building on its use of NVIDIA technologies, Collective World announced at SIGGRAPH that it has joined the NVIDIA Partner Network.

Image: product digital twin configurator and content generation tool built by Collective on NVIDIA Omniverse.

INDG is using Omniverse to introduce new capabilities into Grip, its popular software tool. Grip uses OpenUSD and generative AI to streamline and enhance the creation process, delivering stunning, high-fidelity marketing content faster than ever. "This integration helps bring significant efficiencies to every brand by delivering seamless interoperability and enabling real-time visualization," said Frans Vriendsendorp, CEO of INDG. "Harnessing the potential of USD to eliminate the lock-in to proprietary formats, the combination of Grip and Omniverse are helping set new standards in the realm of digital content creation."

Image generated with Grip, copyright Beiersdorf.

To get started building applications and services using OpenUSD, Omniverse and NVIDIA AI, check out the product configurator developer resources and the generative AI workflow for content creation reference architecture, or submit a contact form to learn more or connect with NVIDIA's ecosystem of service providers.
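The workflow described in this piece starts by assembling brand-approved OpenUSD assets into a scene that is later rendered and used to condition an image generator. The sketch below shows that assembly step with the standard OpenUSD (pxr) Python API; the asset paths are hypothetical, and in practice the USD Search and USD Code NIM microservices would locate the approved assets and help script the assembly.

```python
# Minimal sketch: assembling a marketing scene from brand-approved OpenUSD
# assets using the standard pxr (OpenUSD) Python API. Asset paths are
# hypothetical placeholders.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("campaign_scene.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

root = UsdGeom.Xform.Define(stage, "/Scene")
stage.SetDefaultPrim(root.GetPrim())

# Reference a brand-approved product asset into the scene.
product = UsdGeom.Xform.Define(stage, "/Scene/Product")
product.GetPrim().GetReferences().AddReference("assets/approved/product_bottle.usd")

# Reference an approved environment and position the product within it.
env = UsdGeom.Xform.Define(stage, "/Scene/Environment")
env.GetPrim().GetReferences().AddReference("assets/approved/summer_beach_set.usd")
product.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.75, 0.0))

stage.GetRootLayer().Save()
# The saved scene would then be rendered to a 2D image (e.g., in Omniverse)
# and used to condition an image generator for brand-accurate visuals.
```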
NVIDIA announces groundbreaking AI technologies at SIGGRAPH 2024, including digital human assistants, physical AI advancements, and AI-powered marketing tools, signaling a new era of AI integration across industries.
At SIGGRAPH 2024, NVIDIA CEO Jensen Huang made a bold prediction: "Everybody will have an AI assistant" [1]. This statement underscores NVIDIA's commitment to advancing AI technologies and their integration into everyday life. The company's latest innovations aim to make this vision a reality, with developments spanning digital humans, physical AI, and marketing tools.
NVIDIA has introduced new digital human technologies designed to enhance customer interactions across various industries [2]. These advancements include:
- NVIDIA ACE, a reference design for creating custom, hyperrealistic, interactive avatars, demonstrated through James, a virtual assistant that uses retrieval-augmented generation (RAG) to give contextually accurate answers.
- Maxine 3D, which converts 2D video portraits into 3D avatars for video conferencing and other two-way communication (soon in early access).
- Audio2Face-2D, which animates a static portrait from audio input, creating a speaking digital human from a single image (currently in early access).
These technologies are set to revolutionize customer service, entertainment, and education sectors by providing more engaging and personalized digital interactions.
NVIDIA is also making strides in the realm of physical AI, which combines artificial intelligence with robotics and other physical systems [3]. The company's efforts in this area include:
- NVIDIA NIM and OSMO microservices and frameworks for running multi-stage robotics workloads.
- The MimicGen NIM microservice, which generates synthetic motion data from teleoperated recordings captured with spatial computing devices such as Apple Vision Pro.
- The Robocasa NIM microservice, which generates robot tasks and simulation-ready environments in OpenUSD.
- A three-computer approach to robotics: one computer to train the AI, one to test it in physically accurate simulation, and one inside the robot itself.
These advancements are expected to accelerate the development and deployment of AI-powered robots and autonomous systems across various industries.
Recognizing the potential of AI in digital marketing, NVIDIA has introduced new tools to enhance content creation and campaign management [4]. Key offerings include:
- The USD Search NIM microservice, which gives artists access to vast archives of OpenUSD-based, brand-approved assets.
- The USD Code NIM microservice, which accelerates assembly of those assets into scenes.
- The NVIDIA Edify-powered Shutterstock Generative 3D service for rapidly generating new 3D assets with AI.
- Omniverse-based digital twins and product configurators, whose rendered scenes can steer AI image generators toward brand-accurate visuals.
These tools aim to give marketers greater control over AI-generated content, enabling more efficient and creative marketing campaigns.
NVIDIA's latest AI innovations are poised to transform multiple sectors:
- Customer service, through lifelike, RAG-grounded digital human agents.
- Robotics and industrial automation, through physical AI, simulation and synthetic training data.
- Advertising and marketing, through brand-controlled generative content and digital twins.
- Entertainment, retail and hospitality, through immersive telepresence and virtual experiences.
As NVIDIA continues to push the boundaries of AI technology, the company is positioning itself at the forefront of the AI revolution. With these latest announcements, NVIDIA is not only shaping the future of AI but also accelerating its adoption across industries, bringing us closer to a world where AI assistants are ubiquitous and transformative.
Reference
[1] NVIDIA: "'Everybody Will Have An AI Assistant,' NVIDIA CEO Tells SIGGRAPH Audience"
[2] NVIDIA: "New NVIDIA Digital Human Technologies Enhance Customer Interactions Across Industries"
[3] "Nvidia begins ushering in next wave of AI - physical AI: SIGGRAPH 2024"
[4] NVIDIA: "Supercharges Digital Marketing With Greater Control Over Generative AI"