Adobe has entered the generative AI market with the release of Firefly, a new family of generative AI models designed to work across Adobe’s suite of apps and services.
Firefly will comprise multiple AI models covering a range of use cases, all focused on generating media content. According to Alexandru Costin, VP of generative AI, Firefly is the next step in Adobe’s AI journey, combining “gentech” models with decades of investment in imaging, typography, illustration, and more to produce assets for customers’ workflows across Creative Cloud, Experience Cloud, and Document Cloud.
The Firefly beta currently includes a single model capable of generating images and text effects from text descriptions. The model, trained on millions of photos, will soon be able to generate content from a text prompt across Adobe apps such as Express, Photoshop, Illustrator, and Adobe Experience Manager. For now, users can access it via a website; the company will announce its pricing structure soon.
Beyond basic text-to-image generation, Adobe’s first Firefly model can also apply styles or textures to lettering and fonts based on user-supplied descriptions. According to the company, artwork created with Firefly models will carry metadata indicating that it is partially or entirely AI-generated, meeting practical and legal requirements.
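For readers curious what such provenance metadata might look like in practice, here is a minimal sketch of how a downstream tool could pull the XMP packet out of an image file and scan it for an AI-generation marker. The property names checked below are illustrative assumptions, not Adobe’s published schema, and Firefly’s metadata may also live in containers (such as Content Credentials) that this snippet does not parse.

```python
import re
import sys


def extract_xmp(path: str) -> str | None:
    """Pull the raw XMP packet (if any) out of an image file.

    XMP is plain XML embedded in the file, so a byte search for the
    <x:xmpmeta ...> ... </x:xmpmeta> envelope is enough for a quick check.
    """
    with open(path, "rb") as f:
        data = f.read()
    match = re.search(rb"<x:xmpmeta[^>]*>.*?</x:xmpmeta>", data, re.DOTALL)
    return match.group(0).decode("utf-8", errors="replace") if match else None


def looks_ai_generated(xmp: str) -> bool:
    """Heuristic: scan the XMP text for markers a generator might embed.

    These strings are guesses for illustration (the first two come from the
    IPTC digital source type vocabulary); Adobe's exact fields may differ.
    """
    markers = ("DigitalSourceType", "trainedAlgorithmicMedia", "GeneratedBy")
    return any(marker in xmp for marker in markers)


if __name__ == "__main__":
    xmp = extract_xmp(sys.argv[1])
    if xmp is None:
        print("No XMP metadata found.")
    elif looks_ai_generated(xmp):
        print("AI-generation marker present in XMP metadata.")
    else:
        print("No AI-generation marker found in XMP metadata.")
```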
While some experts believe that training AI models on public images, even copyrighted ones, may be covered by the fair use doctrine in the United States, the issue is unlikely to be resolved anytime soon, especially given the conflicting laws proposed elsewhere. Adobe’s solution is to train Firefly models solely on content from Adobe Stock, the company’s royalty-free media library, along with openly licensed content and public domain content whose copyright has expired.
Adobe is also exploring a compensation model for Stock contributors, which would allow them to monetize their skills and benefit from any revenue Firefly generates. According to the company, users will be able to train and fine-tune Firefly models on their own content to steer the model’s outputs toward specific styles and design languages. Adobe’s approach could resemble Shutterstock’s recently launched Contributor Fund, which compensates creators whose work is used to train AI art models.
The sources for this piece include an article in TechCrunch.