OpenAI has unveiled three new voice AI models—gpt-4o-transcribe, gpt-4o-mini-transcribe, and gpt-4o-mini-tts—designed to streamline the addition of speech capabilities to applications. These models, accessible via OpenAI’s API, enable developers to incorporate speech-to-text and text-to-speech functionalities into their apps with minimal effort.
Building upon the GPT-4o architecture introduced in May 2024, these models have undergone extensive post-training on specialized audio datasets to improve their transcription and speech performance. Jeff Harris, a member of OpenAI’s technical staff, said the new models deliver better accuracy than the earlier Whisper model, particularly with diverse accents and in noisy environments.
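For developers, the transcription side slots into the existing audio API. The sketch below, using the official openai Python SDK, shows roughly what a call might look like; the file name and model choice are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch: transcribing an audio file with gpt-4o-transcribe.
# Assumes the official `openai` Python SDK is installed and OPENAI_API_KEY
# is set in the environment; "meeting.mp3" is a placeholder file name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",  # or "gpt-4o-mini-transcribe" for lower cost
        file=audio_file,
    )

print(transcript.text)
```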
A notable feature of the gpt-4o-mini-tts model is its customizable voice outputs. Users can adjust accents, pitch, tone, and even convey specific emotions through simple text prompts, allowing for tailored and dynamic interactions within applications.
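In practice, that steering happens by sending a plain-text instruction alongside the script to be spoken. The sketch below illustrates one way such a request might look with the openai Python SDK; the voice preset, instruction wording, and output handling are assumptions for illustration, not values from OpenAI's announcement.

```python
# Minimal sketch: generating speech with gpt-4o-mini-tts and steering its
# delivery through a text instruction. Voice name and instruction text are
# assumed examples.
from openai import OpenAI

client = OpenAI()

response = client.audio.speech.create(
    model="gpt-4o-mini-tts",
    voice="coral",  # one of the built-in voice presets (assumed)
    input="Thanks for calling. Your order is on its way!",
    instructions="Speak in a warm, upbeat tone with a slight British accent.",
)

# Save the returned audio bytes to disk.
with open("greeting.mp3", "wb") as f:
    f.write(response.content)
```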
For individual users interested in exploring these capabilities, OpenAI has launched a demo site, OpenAI.fm, offering limited testing and interactive experiences with the new voice models.
These developments mark a significant step forward in making advanced speech functionalities more accessible to developers, paving the way for more interactive and personalized user experiences across various applications.