More work needed to blunt public’s AI privacy concerns: Report


Organizations aren’t making much progress in convincing the public their data is being used responsibly in artificial intelligence applications, a new survey suggests.

The report, Cisco Systems’ seventh annual data privacy benchmark study, was released Thursday in conjunction with Data Privacy Week.

It includes responses from 2,600 security and privacy professionals in Australia, Brazil, China, France, Germany, India, Italy, Japan, Mexico, Spain, the United Kingdom, and the United States. The survey was conducted in the summer of 2023.

Among the findings, 91 per cent of respondents agreed they need to do more to reassure customers that their data is being used only for intended and legitimate purposes in AI.

“This is similar to last year’s levels,” Cisco said in a news release accompanying the report, “suggesting not much progress has been achieved.”

Most respondents said their organizations were limiting the use of generative AI (GenAI) over data privacy and security issues. Twenty-seven per cent said their firm had banned its use, at least temporarily.

Customers increasingly want to buy from organizations they can trust with their data, the report says, with 94 per cent of respondents agreeing their customers would not buy from them if they did not adequately protect customer data.

Many of the survey responses show organizations recognize privacy is a critical enabler of customer trust. Eighty per cent of respondents said their organizations were getting significant benefits in loyalty and trust from their privacy investment. That’s up from 75 per cent in the 2022 survey and 71 per cent in the 2021 survey.

Graphic from Cisco Systems 2024 Privacy Benchmark report

Nearly all (98 per cent) of this year’s respondents said they report one or more privacy metrics to the board, and over half are reporting three or more. Many of the top privacy metrics tie very closely to issues of customer trust, says the report, including audit results (44 per cent), data breaches (43 per cent), data subject requests (31 per cent), and incident response (29 per cent).

However, only 17 per cent said they report progress to their boards on meeting an industry-standard privacy maturity model, and only 27 per cent report any privacy gaps that were found.

Respondents in this year’s report estimated the financial benefits of privacy remain higher than when Cisco started tracking them four years ago, but with a notable difference. On average, they estimated benefits in 2023 of US$2.9 million. This is lower than last year’s peak of US$3.4 million, with similar reductions in large and small organizations.

“The causes of this are unclear,” says the report, “since most of the other financial-oriented metrics, such as respondents saying privacy benefits exceed costs, respondents getting significant financial benefits from privacy investment, and ROI (return on investment) calculations, all point to more positive economics. We will continue to track this in future research to identify if this is an aberration or a longer-term trend.”

One challenge facing organizations when it comes to building trust with data is that their priorities may differ somewhat from those of their customers, says the report. Consumers surveyed said their top privacy priorities are getting clear information on exactly how their data is being used (37 per cent), and not having their data sold for marketing purposes (24 per cent). Privacy pros said their top priorities are complying with privacy laws (25 per cent) and avoiding data breaches (23 per cent).

“While these are all important objectives [for firms], it does suggest additional attention on transparency would be helpful to customers — especially with AI applications where it may be difficult to understand how the AI algorithms make their decisions,” says the report.

The report recommends organizations:

— be more transparent in how they apply, manage, and use personal data, because this will go a long way towards building and maintaining customer trust;
— establish protections when using AI for automated decision-making involving customer data, such as AI ethics management programs, keeping humans involved in the process, and working to remove any biases in the algorithms;
— apply appropriate control mechanisms and educate employees on the risks associated with generative AI applications;
— continue investing in privacy to realize the significant business and economic benefits.

The post More work needed to blunt public’s AI privacy concerns: Report first appeared on IT World Canada.
Howard Solomon
