Meta has released an open-source AI tool called AudioCraft that can generate audio from text prompts. The tool bundles three models: AudioGen, EnCodec, and MusicGen. AudioGen generates sound effects from a written description, EnCodec is a neural audio codec that compresses audio into discrete tokens and reconstructs it, and MusicGen generates music from text.
Meta is making the code and model weights for AudioCraft available on GitHub. This will allow developers and researchers to experiment with the tool and contribute to its development.
AudioCraft is regarded as a significant advance in generative AI. Previous progress in the field has largely focused on text and image generation; AudioCraft tackles the harder task of text-to-audio generation. By training language models over the discrete tokens produced by its EnCodec neural audio codec, Meta has enabled AudioCraft to learn the associations between audio and text.
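The idea of a codec producing discrete tokens that a language model can predict can be illustrated with a toy sketch. This is not the real EnCodec (which uses learned neural encoders and residual vector quantisation); it is a minimal stand-in, with a hypothetical frame size and a random codebook, showing how a waveform becomes a short sequence of integer tokens and how those tokens decode back into audio frames.

```python
import numpy as np

# Toy stand-in for a neural audio codec (NOT the real EnCodec):
# the waveform is split into fixed-size frames, and each frame is
# replaced by the index of its nearest codebook vector. A language
# model can then be trained to predict these integer tokens.

rng = np.random.default_rng(0)

FRAME = 64          # samples per frame (hypothetical)
CODEBOOK_SIZE = 16  # number of discrete codes (hypothetical)

# Random codebook standing in for a learned one.
codebook = rng.normal(size=(CODEBOOK_SIZE, FRAME))

def encode(wav: np.ndarray) -> np.ndarray:
    """Split the waveform into frames; map each to its nearest code index."""
    n_frames = len(wav) // FRAME
    frames = wav[: n_frames * FRAME].reshape(n_frames, FRAME)
    # Nearest-neighbour search against the codebook.
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)  # one integer token per frame

def decode(tokens: np.ndarray) -> np.ndarray:
    """Look each token up in the codebook and concatenate the frames."""
    return codebook[tokens].reshape(-1)

wav = rng.normal(size=1024)   # 1024 samples -> 16 frames -> 16 tokens
tokens = encode(wav)
recon = decode(tokens)
print(tokens.shape, recon.shape)  # (16,) (1024,)
```

In the real system the codebook is learned so that decoding yields a faithful reconstruction; here the reconstruction is only shape-compatible, which is enough to show why audio generation reduces to predicting token sequences.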
AudioCraft could be used to create realistic sound effects for video games, generate music for digital worlds, or even create new forms of art.
Meta is making AudioCraft available for research use, but it has not yet announced any commercial applications for the tool.
The sources for this piece include an article in Axios.