The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2024 TheOutpost.AI All rights reserved
On Fri, 13 Sept, 8:04 AM UTC
5 Sources
[1]
Adobe's Firefly AI videos arrive only when it is safe, amidst an India push
It has been clear for a while that text-to-video will be the next significant chapter for generative artificial intelligence (AI), and while most tools remain in limited access, the pace at which they are gaining realism makes these developments intriguing. Earlier in the year, OpenAI gave the world its first glimpse of Sora, whose early demos showed off realistic generations that at first glance would be difficult to identify as AI-made. Runway's Gen-3 Alpha has done much the same. Now it is Adobe's turn, confirming that its Firefly platform will add what it calls the Firefly Video Model later this year. OpenAI still hasn't shared a timeline, though it may in the coming weeks. Adobe confirms that Firefly's generative video capabilities will find priority integration within Premiere Pro, which underlines the company's belief that AI is ready for professional video content and editing workflows. "Just like our other Firefly generative AI models, editors can create with confidence knowing the Adobe Firefly Video Model is designed to be commercially safe and is only trained on content we have permission to use -- never on Adobe users' content," says Ashley Still, Senior Vice President and General Manager of the Creative Product Group at Adobe. The future of generative video is still unclear, in terms of how these tools will find wider acceptance and how they will handle often complex prompts. Realism in demos is great; Sora impressed us, and Firefly's video generation looks no less promising. Yet those demos rely on extremely specific prompts crafted to highlight capabilities, and users' prompts are rarely so clear and crisp. To that point, Adobe is pitching the editing capabilities too.
Alongside, Adobe believes prompts in the Firefly Video Model will be useful for filling gaps in a video edit by generating generic footage (also called b-roll), and for creating secondary perspectives on a video you share with Firefly. Want a shot of a skyline as seen through binoculars, or through a smartphone's video camera? The Firefly Video Model will be able to generate a video with that perspective. "With the Firefly Video Model you can leverage rich camera controls, like angle, motion and zoom to create the perfect perspective on your generated video," adds Ashley Still. There will be three pillars to this -- Generative Extend, Text to Video, and Image to Video -- all hoping to find relevance in the workflows of typical creatives and enterprise users. Adobe insists the Firefly Video Model will only release as a beta later this year, once it is "commercially safe". An essential element of this is content credentials, an industry-wide approach to labelling AI-generated content which HT has covered in detail earlier. This labelling is meant to differentiate generations from real videos or photos. With realistic video generation, as is already the case with photos and audio (and often a mix of the two), it may be important to help distinguish reality from the artificial to prevent misuse. Another aspect is how video generation models handle creating human faces, which may or may not bear resemblance to actual, living people. These developments, as the company counters AI competition and battles rival creative workflow platforms, come ahead of Adobe's annual developer conference, MAX, next month.
One development worth noting: Google's new generative tools that use the Gemini models (also available on the new Pixel 9 phones) clearly do not generate human faces from any prompt, and do not apply magic edits to photos with human faces -- they will not change the perspective of the background in a photo that includes my face and a friend's face. With an object such as a car, on the other hand, you can create backdrops that make it seem as though you've parked with the New York skyline or Kensington Palace in the background. Adobe has also added support for eight Indian languages to its versatile editing platform, Adobe Express. This should strengthen the platform's relevance in India as a market, as competition with Canva's Magic Studio increases. "With millions of active users, Adobe Express is seeing rapid adoption in India, and we're excited to double down on this diverse market's fast expanding content creation requirements by introducing user-interface and translation features in multiple Indian languages," says Govind Balakrishnan, Senior Vice President, Adobe Express and Digital Media Services. The company confirms that Express on the web will support Hindi, Tamil and Bengali, while the translate feature will support Hindi, Bengali, Gujarati, Kannada, Malayalam, Punjabi, Tamil, and Telugu. Canva, too, earlier this year added support for multiple Indian and global languages across its suite, including translation of content as well as generations, also aimed at teams and business users.
[2]
Adobe's Firefly AI Hits 12 Billion Generations, Previews Video Creator | PYMNTS.com
Adobe's Firefly Services are at the forefront of AI-driven innovation, now reaching a significant milestone of 12 billion generations. This achievement highlights Adobe's strategic focus on enhancing its Creative Cloud and Document Cloud platforms, showcasing how its AI technologies are reshaping content creation and document workflows with efficiency and creativity. During the company's third-quarter earnings conference call with analysts and investors Thursday (Sept. 12), Adobe president and CEO Shantanu Narayen said Adobe's customer-centric approach to AI is "highly differentiated across data, models and interfaces. We train our Firefly models on data that allows us to offer customers a solution designed to be commercially safe." Adobe has released Firefly models for imaging, vector and design, and previewed a new Firefly Video model. "Our greatest differentiation comes at the interface layer with our ability to rapidly integrate AI across our industry-leading product portfolio, making it easy for customers of all sizes to adopt and realize value from AI," Narayen explained. "We're delighted to see customer excitement and adoption for our AI solutions continue to grow and we have now surpassed 12 billion Firefly-powered generations across Adobe tools." Third-quarter revenue rose 11% to $5.41 billion while digital experience subscription revenue grew 12% to $1.23 billion. "We're bringing together content creation and production, workflow and collaboration and campaign activation and insights across Creative Cloud, Express and Experience Cloud," Narayen said. "New offerings including Adobe GenStudio and Firefly Services empower companies to address personalized content creation at scale with agility and enable them to address their content supply chain challenges." Firefly-powered features in Adobe Photoshop, Illustrator, Lightroom and Premiere Pro help amplify users' creative potential and increase productivity. 
Adobe Express offers a streamlined, versatile platform for effortless content creation, enabling millions to unlock their creative capabilities. Acrobat's AI Assistant enhances the value extracted from PDF documents, while Adobe Experience Platform's AI Assistant helps brands automate workflows and cultivate new audiences and customer journeys. Additionally, Adobe GenStudio merges content and data, combining rapid creative expression with enterprise-level personalization to meet scale needs effectively. David Wadhwani, chief business officer, digital media at Adobe, said during the earnings call that, for decades, PDF has been the "de facto standard for storing unstructured data, resulting in the creation and sharing of trillions of PDFs. The introduction of AI Assistant across Adobe Acrobat and Reader has transformed the way people interact with and extract value from these documents." In the third quarter, Wadhwani said Adobe released significant advancements, including the ability to have conversations across multiple documents and support for different document formats, saving users valuable time and providing important insights. "We're thrilled to see this value translate into AI Assistant usage, with over 70% quarter-over-quarter growth in AI interactions," he noted. In addition to consumption, Wadhwani added: "We're focused on leveraging generative AI to expand content creation in Adobe Acrobat. We've integrated Adobe Firefly image generation into our Edit PDF workflows. We've optimized AI Assistant in Acrobat to generate content fit for presentations, emails and other forms of communication. And we're laying the groundwork for richer content creation, including the generation of Adobe Express projects. The application of this technology across verticals and industries is virtually limitless." 
Tata Consultancy Services recently used Adobe Premiere Pro to transcribe hours of conference videos and then used AI Assistant in Acrobat to create digestible event summaries in minutes, Wadhwani noted. "This allowed them to distribute newsletters on session content to attendees in real time," he said. "We're excited to leverage generative AI to add value to content creation and consumption in Acrobat and Reader in the months ahead. Given the early adoption of AI Assistant, we intend to actively promote subscription plans that include generative AI capabilities over legacy perpetual plans that do not." When consumers use Adobe's generative features, "they retain better," Wadhwani said. "We're also seeing that when people come to Adobe to try our Creative Cloud applications or Express application, they're able to convert better. There are all these ancillary implied benefits that we're getting. But in terms of direct monetization, what we've said in the past is that the current model is around generative credits. We do see with every subsequent capability we integrate into the tool, total credits consumed going up. Now, what we are trying to do as we go forward, we haven't started instituting the caps yet. And part of this is, as we've said all along, we wanted to really focus our attention on proliferation and usage across our base." Wadhwani said leadership was "watching very closely as the economy of generative credits evolves. And we're going to look at instituting those caps at some point when we feel the time is right and/or we're also looking at other alternative models." Wadhwani also added that company officials "see the opportunity to engage very deeply in the monetization. But we want to play it out over time and proliferation continues to be our primary guide."
[3]
Adobe Pushes Into Generative Video With Firefly Amid AI Controversy - Decrypt
Adobe announced a significant expansion of its Firefly generative AI platform, introducing video creation and editing capabilities slated for release later this year. The new Firefly Video Model positions Adobe to compete directly with emerging players in the generative video space, including OpenAI's Sora. Set to debut in beta form, the video expansion to the Firefly tool will integrate with Adobe's flagship video editing software, Premiere Pro. This integration aims to streamline common editorial tasks and expand creative possibilities for video professionals. The model boasts several notable features, including the capacity to generate B-roll footage from text prompts, with Adobe asserting that high-quality clips can be produced in under two minutes. This capability mirrors the pure video generation offered by platforms like Sora, Kling, or Dream Machine. Another new tool, Generative Extend, enables editors to lengthen existing clips, smoothing transitions and adjusting timing to align perfectly with audio cues. Moreover, the AI can address video timeline gaps, helping to resolve continuity issues in editing by contextually connecting two clips within the same timeline -- a feature that distinguishes Adobe from its competitors. The Firefly Video Model also incorporates the ability to eliminate unwanted elements from footage, akin to Photoshop's content-aware fill. Adobe says its generative AI technology edits each frame and maintains consistency throughout the timeline, turning a typically slow, manual process into a faster, automated one. Additionally, the model can produce atmospheric elements like fire, smoke, and water, thereby enhancing video compositing options. While not revolutionary, this capability does add flexibility to Adobe's video editing suite. Also, just like other existing generative video tools, Firefly supports various camera movements and angles. 
The samples shared in the announcement show a pretty powerful model, capable of understanding context and providing coherent generations. Finally, Adobe emphasizes that Firefly is "commercially safe" -- trained exclusively on licensed content, mitigating potential copyright concerns. This may be a strategic move considering that Adobe's foray into generative AI has been rocky -- to put it mildly. When the company first rolled out AI features in Photoshop, reactions were mixed, with some creatives seeing potential and others being more skeptical. But then Adobe stepped in it big time. A license change appeared to give Adobe the green light to use customer data, and all hell broke loose. Content creators, from YouTube stars to industry analysts, raised their voices against the company -- some even recommending ditching the suite in favor of less popular (but more pro-creator, anti-AI) competitors. Those competitors smelled blood in the water. Procreate's CEO didn't mince words, declaring he "fucking hated AI" and swearing the tech would never reach their app. Affinity swooped in and assured users that there wouldn't be generative AI in its suite of products either. Adobe scrambled to patch things up, tweaking its terms of service, but the damage was done. Its reputation took a beating, especially given the groundswell of anti-AI sentiment in creative circles. Despite the PR nightmare, Adobe remains firm in its pro-AI vision. Just a few weeks ago, the company introduced Magic Fixup, a technique trained on video rather than still images that enables more sophisticated edits than conventional image editors. The company has opened a waitlist for the Firefly Video Model beta, though specific release dates have not been announced.
[4]
Adobe previews Firefly's new gen AI enhanced 'Video Model' for on-demand clip creation
Adobe Firefly will soon be able to create short videos, with the goal of helping creators achieve their vision. Firefly will receive a new "Video Model" later this year, allowing the artificial intelligence (AI) platform to generate short videos on demand. The company even made a teaser trailer demonstrating what it's capable of. According to the announcement, the new model consists of two main parts. The first, a "Text to Video" component, lets users create clips via text prompt; you can instruct the AI to include details like specific camera angles and zoom speed. The second is "Image to Video," which takes uploaded still images or illustrations and animates them into moving clips. Both will roll out later this year on the official Adobe Firefly website. A third tool, "Generative Extend," will be exclusive to Adobe Premiere Pro: it fills in the gap between two pieces of footage with generated, contextually aware content. "Generative Extend" will also be released in the coming months, but as a beta; there is no word yet on when the final version will launch. The language surrounding this reveal is quite interesting. Unlike OpenAI's Sora, which aims to make proper films, Firefly's new model aims to be more of a support tool. Adobe wants it to enrich a creator's "ability to tell beautiful and compelling stories." Throughout the trailer and post, the company frequently mentions how the AI is "commercially safe": it was trained on "public domain or licensed content -- never user content." The issue of copyright infringement has been a major sticking point for generative AI critics, who argue the tech could infringe on a creator's work by taking content without permission or producing derivatives.
It seems Adobe is aware of these problems, and it is making abundantly clear that it is not taking anyone's work without permission. The company has yet to say when the final version of Firefly's "Video Model" will roll out, but if you're interested, you can sign up on Adobe's website to be notified when it becomes available. There are even a couple of AI-made samples to check out.
[5]
Adobe: Firefly AI Trained on Licensed Content, Not User Data
Adobe has assured its users that it believes in developing artificial intelligence (AI) responsibly. In a newsletter sent to users on September 10, the company's president David Wadhwani emphasized that it trains its family of AI models, Adobe Firefly, only on licensed content. He also mentioned that the company compensates Adobe Stock contributors for the use of their content to train Firefly. "Unfortunately, not all companies prioritize creators' intellectual property this way, but we believe this is the right way," he argued. This comes after the company caught flak for updating its terms of service in June, which suggested that the company may access, view or listen to user content "through both automated and manual methods." Adobe assured users that it would do this only in limited ways, as permitted by law. The terms also said that, for Adobe to operate or improve its Services and Software, users grant it a license "to use, reproduce, publicly display, distribute, modify, create derivative works based on, publicly perform, and translate the Content." At the time, many users argued that through these terms Adobe could use their data to train its AI models. The company then clarified that users own their content and that Adobe would not use it to train generative AI models. In the recent newsletter, Wadhwani directed users to Adobe Firefly's webpage, which reiterates that the company does not rely on user data for training its models. It also mentions that the company doesn't mine or scrape content from the web for training data. The company plans to release new generative AI features allowing customers to easily adjust existing Stock content in real time to match their needs. Adobe expects this to "increase license conversion and create a virtually unlimited catalogue of exceptional content," which in turn would increase stock contributors' earning potential. The company announced that stock contributor earnings have reached an all-time high.
It announced a second round of bonuses for creators whose content was used to train the Firefly AI models between June 2023 and June 2024. Additionally, "any Generative AI content uploaded within the same time frame and used for training will be included in the bonus," the company said. This means creators on Adobe Stock also earn from uploading AI-generated images to the service. Adobe has updated its policies for submitting generative AI content to Adobe Stock, stating that contributors cannot use titles implying that the content depicts a newsworthy event. This comes after the platform was found selling AI-generated images of the Israel-Palestine conflict in November last year, when bad actors were using AI-generated images to spread propaganda about the conflict. The company says it has also updated its content usage policies to prohibit editorial use of Adobe Stock content to mislead or deceive people. Adobe added that it is investing in both content moderation technology and staff. "This expanded auditing has resulted in the reclassification and removal of over 100K assets," the Adobe Stock policy update says. The company is using new machine learning classifiers to identify when someone uses an artist's name in a way that violates its policies, and to ensure higher accuracy, it will now also allow users to report such content. The company said it is also advocating for legislation like the Federal Anti-Impersonation Right (FAIR) Act to help protect artists from the harmful and unfair use of AI to replicate their work. In its earnings call for the second quarter of 2024, the company mentioned that it is leveraging Firefly to increase engagement in Adobe's flagship products. During the call, Wadhwani said the company wants to add more users to its products via AI and then migrate existing users to higher-priced plans that offer access to more AI features.
The company argues that the real benefit of AI is allowing people to perform tasks faster, and that it achieves this by embedding AI into the workflows they're already accustomed to.
Adobe introduces generative AI video capabilities to Firefly, reaching 12 billion generations. The company addresses ethical concerns and emphasizes safety measures while expanding its AI offerings in India.
Adobe, the software giant known for its creative tools, is making significant strides in the artificial intelligence realm with its Firefly AI. The company recently announced the expansion of Firefly's capabilities to include generative video, marking a pivotal moment in the evolution of AI-powered content creation 1.
Firefly AI has seen remarkable adoption since its launch, with the platform achieving an impressive milestone of 12 billion generations 2. This rapid growth underscores the increasing demand for AI-generated content across various industries and creative domains.
The introduction of video generation capabilities to Firefly AI represents a significant leap forward. Adobe's Chief Technology Officer, Ely Greenfield, emphasized that the company is taking a cautious approach to ensure the technology is safe and reliable before its full release 4. This move positions Adobe to compete with other tech giants in the burgeoning field of AI-generated video content.
As Adobe pushes into new AI territories, it faces scrutiny regarding the ethical implications of its technology. The company has been proactive in addressing these concerns, particularly regarding the data used to train its AI models. Adobe has clarified that it uses only licensed content to train Firefly AI, aiming to alleviate fears of copyright infringement and unauthorized use of artists' work 5.
Adobe is not only focusing on technological advancements but also on geographical expansion. The company is making a significant push into the Indian market, recognizing the country's vast potential for AI adoption and creative innovation 1. This strategic move could open up new opportunities for Adobe in one of the world's fastest-growing digital economies.
The expansion of Firefly AI comes at a time when the AI industry is facing increased scrutiny and controversy. Adobe's approach of prioritizing safety and ethical considerations in its AI development is a response to the growing concerns about the potential misuse of generative AI technologies 3. By emphasizing transparency and responsible AI practices, Adobe aims to differentiate itself in a competitive and contentious market.
As Adobe continues to develop and refine its AI offerings, the implications for creative industries are profound. The ability to generate high-quality video content on-demand could revolutionize fields such as marketing, entertainment, and education. However, it also raises questions about the future role of human creators and the potential impact on traditional content creation processes.