Salesforce has unveiled an updated AI acceptable use policy that establishes firm boundaries around the use of its AI services, with the aim of promoting responsible innovation.
Under the new policy, Salesforce customers are expressly prohibited from employing the company’s AI products or any third-party services connected to Salesforce for activities related to child abuse, deepfakes, predictive profiling of protected categories, or automating decisions with legal consequences, among other specified use cases.
Paula Goldman, Salesforce’s Chief Ethical and Humane Use Officer, emphasized the significance of these policy updates, asserting that they empower customers to deploy Salesforce products with confidence, ensuring an ethical AI experience from product development to deployment.
Salesforce says the policy allows it to uphold the commitments it makes to its clients, such as ensuring that its products do not cause undue harm. The company adds that the policy also aligns with the expectations of third parties when it works with partners like OpenAI, Anthropic, and others, opening new opportunities for market collaboration.
According to Salesforce, the acceptable use policy for artificial intelligence will be critical to the company's future commercial strategy. A subcommittee of the Ethical Use Advisory Council, along with partners, industry experts, and developers, was extensively consulted prior to publication.
The sources for this piece include an article in CIO Dive.