Tech Newsday

Bard faces criticism over ethical concerns

Google’s AI chatbot, Bard, is facing criticism from the company’s own employees, who say its development failed to adhere to ethical recommendations, according to reports.

The employees accuse the chatbot of providing erroneous and potentially dangerous information, and management has been accused of brushing aside safety and ethical concerns in the run-up to Bard’s launch in 2023.

Despite earlier promises by Google to triple its workforce investigating artificial intelligence ethics and committing more resources to analyzing potential hazards, some current and former employees have claimed that the team working on AI ethics has become disempowered and demoralized. Some staff have even been advised not to obstruct the development of generative AI tools.

Meredith Whittaker, president of the Signal Foundation and a former Google manager, warned that “if ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”

Despite Google’s claim that responsible AI is a primary focus for the company, the team responsible for it allegedly lost three members, including the head of governance and projects, in a wave of layoffs in January. Some employees now fear that the rapid pace of expansion will not leave them enough time to examine potential concerns.

Google has promised to embrace AI ethics, but some employees say they still face discouragement from managers when working on machine learning equity.

The sources for this piece include an article in Bloomberg.
