
Netskope releases AI-powered tool to help securely use ChatGPT

Netskope has released a new tool that helps enterprises securely use ChatGPT and other generative AI applications. The tool uses data insights to determine whether employees should be allowed to use ChatGPT, and it can also block inputs that contain sensitive data or code.

Netskope’s new application helps organizations manage and protect their data. It lets employees use innovative applications while keeping sensitive data safe, and gives IT teams visibility into ChatGPT usage along with the ability to create custom policies for accessing the application.

It offers data analysis, policy enforcement, and risk coaching to monitor and block sensitive information or code, so employees can use ChatGPT and similar applications without putting enterprise data at risk, whether they are on-premises or remote. Netskope’s data shows that roughly 10% of enterprise organizations actively block ChatGPT use by their teams. The company believes, however, that blocking is only a short-term solution, as demand for ChatGPT and other generative AI applications is growing rapidly.
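Netskope has not published how these controls are implemented. Purely as an illustration of the general idea of screening a prompt for sensitive data before it reaches a generative AI service, a minimal sketch might look like the following; the rule names, patterns, and function are hypothetical and far simpler than what an enterprise DLP engine would actually use.

```python
import re

# Hypothetical patterns standing in for an enterprise DLP rule set.
# A real deployment would rely on much richer detection (classifiers,
# data fingerprinting, exact-match dictionaries), not a few regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}


def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_rule_names) for a prompt bound for a
    generative AI service."""
    matches = [name for name, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(prompt)]
    return (not matches, matches)


if __name__ == "__main__":
    allowed, hits = screen_prompt("Summarize this: customer SSN 123-45-6789")
    if not allowed:
        # A real gateway would block the request and coach the user instead
        # of just printing a message.
        print(f"Blocked: prompt matched sensitive-data rules {hits}")
```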

James Robinson, deputy chief information security officer at Netskope, said that safely enabling ChatGPT is “a granular problem” that requires a comprehensive approach. He urged security leaders not just to say “yes” or “no” to ChatGPT, but to “know” and “understand” the risks involved.

The sources for this piece include an article in TechRepublic.
