Contrast Security, a provider of application security testing, has open sourced an AI policy designed to help organizations manage privacy and security risks when using Generative AI and Large Language Models (LLMs).
The policy addresses several key concerns, such as avoiding situations where the ownership and intellectual property (IP) rights of software could be disputed later on.
It also guards against the creation or use of AI-generated code that may contain harmful elements, and prohibits employees from allowing public AI systems to learn from the organization's or third parties' proprietary data.
Additionally, it prevents individuals who lack the required authorization or privileges from accessing sensitive or confidential data.
The policy is available for anyone to use or adapt. It is designed as a foundation for CISOs, security experts, compliance teams, and risk professionals who are either new to this field or need a ready-made policy framework for their organizations.
The sources for this piece include an article in SDTimes.