The European Union recently enacted the EU AI Act, which requires foundation AI models, including OpenAI's, to meet stringent transparency requirements before they enter the market. With its general-purpose models falling under the EU's high-risk classification, OpenAI is leveraging its partnership with Axel Springer, a major European publisher, to demonstrate compliance with these regulations, particularly to show that its models are not trained on illicitly obtained data.
The collaboration with Axel Springer, known for publications like Politico and Business Insider, is more than a simple content deal. It is a strategic move, given OpenAI's complicated relationship with the EU, underscored by Germany's consideration of a ChatGPT ban over privacy concerns. The alliance could prove pivotal in helping OpenAI align with the new AI Act's demand for rigorous transparency from high-risk AI models.
Moreover, the partnership emerges in a competitive landscape, with rivals like Grok offering real-time information access and open-source players like Mistral AI positioning themselves as alternatives. Notably, open-source providers are largely exempt from the EU's legislation.
However, the collaboration is not without challenges. OpenAI's decision to partner exclusively with Axel Springer raises concerns about potential bias in ChatGPT's responses, a worry flagged by OpenAI's safety team. Even so, the partnership is seen as a strategic step toward improving ChatGPT's accuracy and reducing misinformation.
Source: Analytics India