According to a Reuters/Ipsos poll, workers across the United States are folding ChatGPT into their everyday routines, relying on it for a variety of tasks. The trend continues despite concerns from industry giants such as Microsoft and Google, which have moved to restrict use of AI-powered chatbots.
The poll found that 28% of respondents regularly use ChatGPT at work, while only 22% said their employers explicitly permit such external tools. Some organizations have gone further: Samsung Electronics banned ChatGPT over data-security concerns after discovering an employee had uploaded sensitive code to the service. Google has likewise warned staff about chatbots, including its own Bard, cautioning that they can produce unwanted code suggestions.
Beyond security lapses, human reviewers are reportedly able to read generated conversations, and research indicates that AI models can reproduce data absorbed during training, potentially exposing sensitive proprietary information. Users' limited understanding of how generative AI services handle their data compounds these concerns.
Ben King, VP of Customer Trust at corporate security firm Okta, emphasizes the complexity of the situation. Because many AI services are offered for free and without contracts, he notes, traditional vendor-assessment processes do not apply, leaving businesses with unresolved questions about data security.
This piece draws on reporting from Reuters.