In a move to protect confidential data and ensure the security of its internal systems, Apple has issued a company-wide memo prohibiting the use of generative AI tools, including OpenAI’s ChatGPT and Microsoft-owned GitHub’s Copilot. The internal memo, obtained by The Wall Street Journal, highlights Apple’s concerns over these AI platforms potentially collecting sensitive information from its employees.
The ban on ChatGPT and Copilot aims to prevent leaks of proprietary code and other confidential information. Because Copilot automates parts of the code-writing process, developers who use it could inadvertently expose Apple’s intellectual property or open their work to unauthorized access. Similarly, ChatGPT’s ability to draft emails raises concerns that confidential information could be disclosed.
Apple is not alone in restricting the use of generative AI tools among its workforce. JPMorgan Chase and Verizon have also prohibited such platforms. Amazon, meanwhile, has taken a different approach, urging its engineers to rely on its own internal AI tool rather than third-party alternatives, according to sources familiar with the matter cited by the WSJ. It is worth noting that Apple has reportedly been developing its own AI model, suggesting a desire for more control over its AI capabilities.
During a recent investor call, Apple CEO Tim Cook acknowledged the potential of generative AI but emphasized that a number of issues with the technology still need to be worked out before it can be adopted with confidence.
The sources for this piece include an article in 9to5Mac.