Curated by THEOUTPOST
On Thu, 26 Sept, 8:27 AM UTC
10 Sources
[1]
Meta AI on WhatsApp gets voice and photo editing tools
At Meta Connect 2024 on Wednesday, Meta introduced new updates to WhatsApp that let users interact with Meta AI using their voices and photos. According to Meta, these updates aim to "make it easier for more people" to share their ideas, enhance their conversations, and explore new experiences.
Talk to Me: Users can now speak directly to Meta AI, which will respond verbally. By pressing the waveform button, you can ask questions and the AI will answer out loud. Different voice options, including the voices of various celebrities, will be introduced gradually.
Look at This: Users can send photos to Meta AI for information. For instance, if you photograph a menu in another language, you can ask the AI to translate it, or take a picture of a plant and ask about its care.
Edit My Photo: Meta AI now offers photo editing capabilities, letting users add, remove, or modify elements in their pictures. You can remove unwanted people from a background or change the color of an object to visualize it differently.
Meta is also expanding its business AIs, aiming to improve customer service for users interacting with businesses. The company confirmed it is starting with thousands of businesses using the WhatsApp Business app, with plans to broaden these services over the next year.
[2]
Meta AI Gets a Huge Upgrade; Voice Chat and AI Photo Editing are Here
With the release of new Llama 3.2 multimodal models -- 11B and 90B -- Meta has unlocked new use cases for its Meta AI chatbot. At the Meta Connect 2024 event, the company announced several new features for Meta AI that allow users to interact with various modalities like audio and images. First and foremost, you can now talk to Meta AI using voice and it will reply out loud. You can continue the conversation and ask questions on any topic. The best part is that it can find even the latest information by browsing the internet. It's not as conversational as Gemini Live and ChatGPT Advanced Voice, but you get a standard two-way voice chat interface. There is no support for interruptions, though. The timing of this announcement couldn't be any better as ChatGPT Advanced Voice Mode started rolling out to users today. Meta Voice chat is available through the Meta AI chatbot on WhatsApp, Facebook, Messenger, and Instagram DM. There are different AI voices available and you can even choose the voice of public figures such as John Cena, Keegan Michael Key, Awkwafina, Dame Judi Dench, and Kristen Bell. Since Meta AI is now powered by Llama 3.2 11B and 90B multimodal models, you can upload an image and ask Meta AI to analyze it. For instance, you can upload an image of a mountain, ask where it is located, and find more information along the way. You can also choose to upload charts and diagrams and infer meaning from your visual input. Next, Meta AI brings AI photo editing to its social media apps. You can upload an image and ask the AI chatbot to change the background, erase unwanted objects, change outfits, and much more. Basically, AI photo editing is now readily available on Meta's social stack, including WhatsApp, FB Messenger, Instagram, and Facebook. It works similarly to Google's Magic Editor, but it's available within your social media apps and you can seamlessly share them as stories. 
Best of all, the Reimagine AI tool now lets you create AI-generated images of yourself. You can reimagine your photos from your feed, Stories, and Facebook profile pictures by simply adding a prompt, and Meta AI will instantly generate an image based on it. That means you don't have to train a LoRA on your own images to create AI-generated pictures of yourself. My colleague and boss Devinder is in Menlo Park attending Meta Connect 2024. He got a chance to go hands-on with this new Meta AI capability in WhatsApp and generate some cool photos. Last but not least, one of the most promising features of Meta AI is the automatic translation of Reels. If a creator has published a Reel in a language you don't understand, Meta AI will automatically translate the audio into your language with lip-syncing to match. Currently, the feature is limited to Latin America and the US, in English and Spanish. Meta says the feature will be expanded to more regions and languages soon. Next, Facebook and Instagram users may see Meta AI-generated images in their feed based on their interests or current trends. You can also tweak the prompt to generate new content in your feed. And finally, users will be able to personalize themes using AI in their private DMs. So these are the new Meta AI features coming to WhatsApp, Instagram, Facebook, and Messenger. Are you excited to check them out? Let us know in the comments below.
[3]
Meta AI can now talk to you and edit your photos
Over the last year, Meta has made its AI assistant so ubiquitous in its apps it's almost hard to believe that Meta AI is only a year old. But, one year after its launch at the last Connect, the company is infusing Meta AI with a load of new features in the hopes that more people will find its assistant useful. One of the biggest changes is that users will be able to have voice chats with Meta AI. Up till now, the only way to speak with Meta AI was via the Ray-Ban Meta smart glasses. And like last year's Meta AI launch, the company tapped a group of celebrities for the change. Meta AI will be able to take on the voices of Awkwafina, Dame Judi Dench, John Cena, Keegan Michael Key and Kristen Bell, in addition to a handful of more generic voices. While the company is hoping the celebrities will sell users on Meta AI's new abilities, it's worth noting that the company quietly phased out its celebrity chatbot personas that launched at last year's Connect. In addition to voice chat support, Meta AI is also getting new image capabilities. Meta AI will be able to respond to requests to change and edit photos from text chats within Instagram, Messenger and WhatsApp. The company says that users can ask the AI to add or remove objects or to change elements of an image, like swapping a background or clothing item. The new abilities arrive alongside the company's latest Llama 3.2 model. The new iteration, which comes barely two months after the Llama 3.1 release, is the first to have vision capabilities and can "bridge the gap between vision and language by extracting details from an image, understanding the scene, and then crafting a sentence or two that could be used as an image caption to help tell the story." Llama 3.2 is "competitive" on "image recognition and a range of visual understanding tasks" compared with similar offerings from ChatGPT and Claude, Meta says. 
The social network is testing other, potentially controversial, ways to bring AI into the core features of its main apps. The company will test AI-generated translation features for Reels with "automatic dubbing and lip syncing." According to Meta, that "will simulate the speaker's voice in another language and sync their lips to match." It will arrive first to "some creators' videos" in English and Spanish in the US and Latin America, though the company hasn't shared details on rollout timing. Meta also plans to experiment with AI-generated content directly in the main feeds on Facebook and Instagram. With the test, Meta AI will surface AI-generated images that are meant to be personalized to each user's interests and past activity. For example, Meta AI could surface an image "imagined for you" that features your face.
[4]
Meta AI Can Now See and Speak to You
Katelyn is a writer with CNET covering social media, AI and online services. Today's Meta Connect event was chock-full of AI news, including AI integrations in the new Meta Quest 3S mixed reality headset and the company's virtual assistant, Meta AI. Meta AI is getting new sight and voice abilities, including celebrity voices and photo editing tools. Meta is also testing the integration of AI-generated content into Instagram and Facebook feeds, specifically tailored to users' interests. You'll be able to use Meta AI's voice on its social platforms: Instagram, Facebook, WhatsApp and Messenger. Meta said the new voice feature is beginning to roll out today in the US, Canada, New Zealand and Australia. If you can't access the feature yet, don't panic -- the rollout will continue over the next month. Meta tapped a few celebrities to lend their voices and bring some humanity to its AI voice. You can choose between the voices of Awkwafina, Dame Judi Dench, John Cena, Keegan Michael Key, and Kristen Bell for your Meta AI. Meta AI is also getting upgraded visual tech. Now when you upload a photo to Meta AI, it can tell you what's in it, like identifying a specific type of animal or helping break down a recipe. Meta also said you can use its AI assistant to edit photos, including removing background elements, adding new elements and changing the background. Instagram is getting some AI attention too, as Meta said it is using AI to improve video dubbing and lip syncing so Reels can be translated and made accessible in languages beyond that of the original post. Meta also said it's testing "imagined" -- meaning AI-generated -- content in Instagram and Facebook feeds. Meta said the images will be tailored to individual users' interests. Meta currently adds a label to AI-generated content.
[5]
Meta AI can now understand and edit your photos | TechCrunch
Meta AI is starting to catch up with Google when it comes to AI-powered photo editing. On Wednesday, at the Meta Connect 2024 conference, the tech giant announced that Meta AI will now be able to help you edit photos using AI as well as answer questions about the photos you share. The additional features are made possible because Meta AI is gaining multimodal capabilities, powered by its Llama 3.2 models. This means you can now share photos in your chats, not just text, similar to Google Gemini and OpenAI's ChatGPT. When you share a photo, Meta AI can understand what the image contains and answer questions about it. For example, Meta suggests you could share a photo of a flower and then ask the AI what type of flower it is. Or you could share a photo of a delicious dish and ask Meta AI how to make it. Of course, how accurately Meta AI responds to these and other questions still needs to be tested and reviewed. Another key feature arriving with photo support is the ability to edit images using AI. After sending Meta AI the photo, you can ask it to make some sort of change -- like adding or removing an object in the foreground, changing your outfit, or updating the background in some way, like adding a rainbow to the sky. Meta AI can also be used on Instagram when you reshare a photo from your feed to your Instagram Stories. Here, the AI can look at the photo, understand the image, and then generate an accompanying background for your Story. Beyond photo edits, Meta is also testing translation tools for Facebook and Instagram Reels that include automatic dubbing and lip-syncing. These tests will initially run with small groups in the U.S. and Latin America, in both English and Spanish. Meta is also expanding Meta AI's other generative features and testing the placement of Meta AI-generated images in Facebook and Instagram feeds to prompt users to try them.
[6]
Meta AI gets a bunch of free upgrades: Voice, vision and auto-dubbing
In the race to make truly useful AI for a mass audience, Meta just jumped forward a few key steps -- including AI's ability to "see" objects and provide live, lip-synched translations. At the Meta Connect developers' conference, CEO Mark Zuckerberg unveiled the latest version of Llama. That's the open-source Large Language Model (LLM) powering the AI chatbot in the company's main services: Facebook, WhatsApp, Messenger, and Instagram. Given that reach, Zuckerberg described Meta AI as "the most-used AI assistant in the world, probably," with about 500 million active users. The service won't be available in the European Union yet, given that Meta hasn't joined the EU's AI pact, but Zuckerberg said he remains "eternally optimistic that we can figure that out." He's also optimistic that the open-source Llama -- a contrast to Google's Gemini and OpenAI's GPT, both proprietary closed systems -- will become the industry standard. "Open source is the most cost-effective and the most customizable," Zuckerberg said. Llama is "sort of the Linux of AI." But what can you do with it? "It can understand images as well as text," Zuckerberg added -- showing how a photo could be manipulated simply by asking the Llama chatbot to make edits. "My family now spends a lot of time taking photos and making them more ridiculous." Voice chat is now rolling out to all versions of Meta AI, including voices from celebrities such as Judi Dench, John Cena and Awkwafina. Another user-friendly update: When using Meta AI's voice assistant with Meta's smart glasses, you no longer have to use the words "hey Meta" or "look and tell me." Zuckerberg and his executives also demonstrated a number of use cases. For example, a user can set up Meta AI to provide pre-recorded responses to frequently asked questions over video. You can use it to remember where you parked. Or you can ask it to suggest items in your room that might help to accessorize a dress.
The most notable, and possibly most useful feature: live translation. Currently available in Spanish, French, Italian and English, the AI will automatically repeat what the other person said in your chosen language. Zuckerberg, who admitted that he doesn't really know Spanish, demonstrated this feature by having an awkward conversation live on stage with UFC fighter Brandon Moreno. Slightly more impressive was the live translation option on Reels, and other Meta videos. The AI will synchronize the speakers' lips so they look like they're actually speaking the language you're hearing. Nothing creepy about that at all.
[7]
I spoke to Meta AI and, yes, it can carry on a conversation
Face it, no AI is complete until it has a voice, and now Meta AI has one and is ready to engage with you on your favorite Meta platform, including WhatsApp, Facebook, Instagram, and Messenger. I had a chance to try Meta's new Llama-powered chatty capabilities at Meta Connect 2024, which is taking place September 24-26 at the iconic 1 Hacker Way Meta headquarters. Admittedly, the conditions were suboptimal. Meta AI was on a phone out on the Meta campus, where people milled about and helicopters buzzed overhead. A Meta employee told me the phone's mic was on and Meta AI was listening, so I stood before it and talked. "So I can just talk to it?" I asked. Meta AI piped up, "Yes you can. I'm here to help. Just say what's on your mind, and I'll do my best to assist you. Go ahead." Suddenly flummoxed, I couldn't think of a question, so I asked Meta AI if it knew where New York City is, and it immediately gave me a detailed answer. As Meta AI talked, I interrupted and told it I was thinking of moving there, but I didn't know the best place. "That's exciting," Meta AI responded and began outlining the city's five boroughs. I interrupted again and told Meta AI I was considering Manhattan. Without missing a beat, Meta AI told me about Manhattan's diversity. Everything Meta AI said also appeared as text on the screen. I asked Meta AI if it thought I could get a condo in Harlem for under $500,000. To my surprise, it said yes and gave me detailed examples. At this point, there was a bit too much sound interference, and Meta AI did not hear me when I asked about a moving company or when I asked it to stop responding. It really seemed to enjoy going through Harlem condo opportunities. By turning off the speaker for a second, we were able to regain control of Meta AI, which quickly gave me some moving company recommendations. Even with that glitch at the end, this was an impressive little demo. Meta AI's speech capabilities are smart, understand context, and can pivot if you interrupt.
Meta AI's speech capabilities are rolling out now in the US, Canada, Australia, and New Zealand. It can chat, tell stories, and figure things out, like where to move and how to find a home within your price range.
[8]
Facebook and Instagram can now use AI to answer questions about your photos and edit them too
Mark Zuckerberg has announced a ton of cool AI features at Meta Connect 2024 which apply to Meta AI, the chatbot found inside its popular social media apps like Facebook, Instagram DM, WhatsApp, and Messenger. One of the coolest new features he revealed is the ability to ask questions about photos and edit them in Meta AI. Meta AI is now multimodal, which means it can 'see' photos in your chats and answer questions about them. So, you can simply post an image of a bird in your chat with Meta AI and ask it what kind of bird you're looking at and you'll get the right answer. But it goes beyond simply identifying animals and plants. Show it a photo of a meal and ask it how you'd make it and you'll get a list of instructions as well as ingredients. One of the most fun things you can do with AI is to manipulate your photos. This doesn't just involve adding an Instagram filter or changing brightness levels. With AI you can do things with your photos that wouldn't otherwise be possible, like add or remove elements or change the background. Even more impressively, Meta AI can now edit your photos, too. That means that if you want to remove somebody from a photo you can get Meta AI to do it for you. Or maybe you just want to change the background to something else? No problem, Meta AI can do that too. It can even add things to your photos, so if you want to be standing next to a lion, you can make it happen without risking your life. If you want to see what you'd look like in a different outfit then Meta AI is your new fashion consultant. Plus, if you want to reshare a photo from your Instagram feed to your Instagram Story, Meta AI's new backgrounds feature is on hand to intelligently pick a background that will go with the image for your story. We expect the ability to see and edit photos to roll out to Meta AI users in the US, Canada, Australia, and New Zealand over the next month.
[9]
Meta AI Voice is the latest voice assistant to launch -- here's how it stacks up
Unlike Siri and Alexa, Meta AI Voice sits firmly in the conversational category, and there is a good reason for that -- the company needed a better way for people to interact with its Ray-Ban smart glasses, Quest VR headsets, and other devices without access to a keyboard or touch screen. Conversational AI voice allows you to talk to the AI in natural language as if you were talking to a human, and it lets the AI handle complex and vague queries. For example, in the Meta Connect demo, Mark Zuckerberg suggested holding an avocado up to the Meta Ray-Ban smart glasses and saying "What can I make with this?" without specifying the nature of "this". Meta has done something Google and OpenAI haven't, though. It offers up the voices of the famous instead of an unnamed actor or generated voice. Initially, you'll be able to converse with an AI that sounds like Dame Judi Dench, John Cena, Kristen Bell, and more. Unfortunately, the quality of the synthetic voice isn't up there with Gemini or ChatGPT Voice, but you can interrupt it mid-flow and ask it the same level of natural queries. It is accessible on WhatsApp, Facebook Messenger and Instagram. While Meta AI Voice might be less realistic and natural than ChatGPT Advanced Voice, the one thing it has in its favor is the Meta ecosystem. More than three billion people around the world use at least one of Meta's core products every day. Meta AI has over 400 million monthly active users, and it is only really available in the States. The text-based version is there within all the core products and looks the same whether you open it in WhatsApp, Instagram, Facebook or Messenger. Right now you can use it to generate images, have a text-based conversation and even play games. With voice, you'll be able to leave it on the desk and chat away as you go about other tasks. Meta AI also now uses Llama 3.2 90B as its "brain". This is a new multimodal model from Meta that can analyze images as well as text.
It is likely that future versions will also be able to work with more sounds, documents and even video -- if it matches the progress of OpenAI's models. This means that, at the touch of a button in any of the apps you use every day, you'll be able to start talking to an AI. You'll be able to give it a photo you've just taken and ask it for details of the image, or to change an aspect of the image, such as removing an unsightly trash can. The real power of Meta AI Voice will be felt by those wearing the Ray-Ban smart glasses or a Quest headset. These devices will be able to see the world as you do and allow you to talk to the AI about anything you see in real time.
[10]
How Meta's AI Advancements May Impact Social Commerce | PYMNTS.com
Meta's latest AI upgrades, unveiled at its annual Connect conference, could change online shopping through voice-activated assistants and image-recognition technology on social media platforms. The tech giant reported that over 400 million people use Meta AI monthly, with 185 million engaging weekly across its products. Meta claims its AI assistant will become the most used globally by the end of the year. "AI-generated images and captions can supercharge social media marketing. Brands can make content that feels custom-made for each user, at a large scale," Mike Vannelli, an industry expert, told PYMNTS. He added, "AI can analyze what users like and help businesses make targeted campaigns. This leads to more engagement and better returns on investment." Meta announced Llama 3.2, a big advancement in its open-source AI model series, alongside its consumer-facing updates. This new release includes small and medium-sized vision language models (11B and 90B parameters) and lightweight, text-only models (1B and 3B parameters) designed for edge and mobile devices. The vision models can analyze images, understand charts and graphs, and perform visual grounding tasks. The lightweight models, optimized for on-device use, support multilingual text generation and tool-calling abilities, enabling developers to build personalized applications supposedly prioritizing user privacy. New features include voice interaction capabilities. Users can now talk to Meta AI on Messenger, Facebook, WhatsApp and Instagram DM and get spoken responses. Meta is rolling out various voice options, including AI voices of celebrities like Awkwafina, Dame Judi Dench, John Cena, Keegan Michael Key, and Kristen Bell. Owais Rawda, senior account manager at Z2C Limited, told PYMNTS, "Voice-interactive AI creates a more personal customer experience. It gives quick answers, making shopping easier." Users can also share photos with Meta AI for analysis and editing. 
The AI can identify objects in images, answer questions, and edit pictures on command. For example, users can ask Meta AI to identify a flower in a hiking photo or get cooking instructions for a dish they've photographed. Meta AI's new editing capabilities allow users to request photo changes, from altering outfits to replacing backgrounds. The company is also testing an AI translation tool for Reels that automatically translates audio and synchronizes lips in videos, starting with English and Spanish. Meta is expanding its business AI tools to companies using click-to-message ads in English on WhatsApp and Messenger. These AI agents can chat with customers, offer help, and assist with purchases. The company said ad campaigns using AI features got 11% more clicks and 7.6% more conversions than regular campaigns. Over a million advertisers are using these tools, making 15 million ads in the past month. Vannelli highlighted changes in customer service: "Meta AI makes shopping smoother. Customers don't have to switch between pages or wait for a human to respond." Meta is also enhancing its Imagine feature, allowing users to create AI-generated images of themselves as superheroes or in other scenarios directly in their feeds, Stories and Facebook profile pictures. These images can be easily shared and replicated by friends. As Meta refines its AI offerings, its approach to data use, transparency, and user control will be crucial in shaping the adoption and success of these new features. These AI advances represent a big step in Meta's strategy to integrate AI into its core products, potentially reshaping how businesses and consumers interact in the digital marketplace.
Meta introduces significant upgrades to its AI assistant, including voice interaction capabilities and advanced photo editing tools across its social media platforms. These new features aim to enhance user experience and creativity.
Meta has introduced a groundbreaking feature allowing users to engage in voice conversations with Meta AI across its platforms, including WhatsApp, Facebook, Messenger, and Instagram [1][2]. This new capability enables users to ask questions verbally and receive spoken responses, making interaction more natural and accessible [1].
The voice feature comes with a variety of AI voices, including those of celebrities such as John Cena, Keegan Michael Key, Awkwafina, Dame Judi Dench, and Kristen Bell [2][3]. This addition aims to personalize the user experience and make interactions more engaging.
Meta AI now boasts enhanced visual capabilities, powered by the new Llama 3.2 multimodal models [2][5]. Users can upload images for analysis, with Meta AI providing information about the content, such as identifying locations, objects, or even translating text within photos [1][2].
The AI assistant also introduces sophisticated photo editing tools across Meta's social media apps [2][3]. Users can request changes to their images, such as adding or removing objects, changing outfits, or swapping backgrounds [2][3].
This feature is comparable to Google's Magic Editor but is conveniently integrated within Meta's social media ecosystem [2].
Meta is testing the integration of AI-generated images directly into Facebook and Instagram feeds [3][4]. These images will be tailored to individual users' interests and past activity, potentially showing up as "imagined for you" content [3][4]. This feature aims to enhance user engagement and provide personalized content experiences.
An innovative feature in development is the automatic translation of Reels content [2][3]. This technology will use AI to dub videos in different languages while maintaining lip-sync, making content more accessible across language barriers [2][3]. The feature is currently being tested in Latin America and the US, supporting English and Spanish, with plans for expansion to more regions and languages [2].
These updates are part of Meta's broader strategy to enhance its AI assistant's capabilities and integration across its platforms. The company is also expanding its business AI services, aiming to improve customer service for users interacting with businesses on WhatsApp [1].
As Meta continues to develop and roll out these features, they represent a significant step forward in the company's AI offerings, potentially changing how users interact with content and each other across Meta's social media landscape.