OpenAI’s first DevDay conference has come to a close, leaving developers and enterprises with an abundance of exciting new features and improvements.
Beyond handing each attendee $500 in API credits, the company announced a slew of new products and services poised to change the way we interact with AI.
One of the biggest announcements was the release of GPT-4 Turbo, an upgraded version of OpenAI’s flagship GPT-4 language model. With a 128k context window, GPT-4 Turbo can fit more than 300 pages of text into a single prompt, a significant improvement over the previous 32k context window. This means that developers can now build more complex and nuanced AI applications.
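As a rough sanity check on the "300 pages" figure, a quick back-of-envelope calculation works (assuming roughly 400 tokens per page of English text, a common rule of thumb rather than an OpenAI figure; the true ratio varies with formatting and tokenization):

```python
# Back-of-envelope check of the 128k-token context window claim.
CONTEXT_WINDOW = 128_000   # GPT-4 Turbo context window, in tokens
PREVIOUS_WINDOW = 32_000   # previous GPT-4 32k context window

TOKENS_PER_PAGE = 400      # assumption, not an official figure

pages = CONTEXT_WINDOW // TOKENS_PER_PAGE
growth = CONTEXT_WINDOW // PREVIOUS_WINDOW
print(f"~{pages} pages fit in a 128k context ({growth}x the previous window)")
# ~320 pages fit in a 128k context (4x the previous window)
```

Even with a conservative tokens-per-page estimate, the result comfortably clears the 300-page mark quoted above.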
OpenAI also introduced the Assistants API, which lets developers build agent-like experiences into their applications. The API currently supports three tools: Code Interpreter, Retrieval, and Function Calling. Code Interpreter lets an assistant write and run Python code in a sandboxed environment, Retrieval lets it draw on knowledge from documents the developer uploads, and Function Calling lets it invoke functions the developer defines, returning structured arguments for the application to execute.
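To make function calling concrete, here is a minimal sketch of a tool definition in the JSON-Schema-based shape the API expects. The `get_weather` function and its `city` parameter are hypothetical examples; the model does not run the function itself, it only returns a structured call for the application to execute:

```python
import json

# Hypothetical tool definition in the function-calling schema.
# The model returns the function name plus JSON arguments; the
# application is responsible for actually running the function.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example function
        "description": "Get the current weather for a city.",
        "parameters": {         # JSON Schema describing the arguments
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A list like this would be passed as the `tools` parameter when
# creating an assistant.
tools = [weather_tool]
print(json.dumps(tools, indent=2))
```

When the model decides the user's request needs weather data, it responds with a call such as `get_weather(city="Paris")`, which the application runs before feeding the result back to the model.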
OpenAI also debuted a text-to-speech (TTS) API at DevDay. Developers can now generate human-quality speech from text; the API offers six preset voices across two model variants, a standard one and a higher-quality one.
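A minimal sketch of the parameters for a TTS request follows. The six preset voices are alloy, echo, fable, onyx, nova, and shimmer, and the two model variants are `tts-1` (lower latency) and `tts-1-hd` (higher quality); actually sending the request requires the `openai` Python client and an API key, e.g. `client.audio.speech.create(**tts_request)`:

```python
# Sketch of a text-to-speech request. Only the parameter shape is shown
# here; dispatching it needs an authenticated `openai` client.
PRESET_VOICES = ["alloy", "echo", "fable", "onyx", "nova", "shimmer"]
MODELS = ["tts-1", "tts-1-hd"]  # standard vs. higher-quality variant

tts_request = {
    "model": "tts-1",
    "voice": "alloy",
    "input": "Hello from DevDay!",  # the text to synthesize
}

assert tts_request["voice"] in PRESET_VOICES
assert tts_request["model"] in MODELS
print(f"{len(PRESET_VOICES)} voices x {len(MODELS)} models available")
# 6 voices x 2 models available
```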
The sources for this piece include an article in AnalyticsIndiaMag.