Meta unveils FACET tool to combat computer vision biases

Meta has launched FACET (FAirness in Computer Vision EvaluaTion), a benchmark for identifying racial and gender biases in computer vision systems.

The tool is built on Meta’s dataset of over 30,000 images featuring 50,000 individuals, annotated by expert reviewers across a range of categories. This allows researchers to evaluate how computer vision models perform with respect to attributes such as perceived gender and skin tone.

FACET opens the door to crucial questions about the fairness of AI systems. For example, is an AI system better at identifying male skateboarders than female ones? Does it recognize individuals with certain skin tones more reliably? And does its accuracy change when faced with individuals who have curly rather than straight hair?
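The kind of per-group comparison these questions imply can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not FACET's actual methodology or attribute taxonomy: it computes a model's detection rate for each perceived-attribute group and the gap between the best- and worst-served groups.

```python
from collections import defaultdict

# Hypothetical detection results: each record holds an illustrative
# perceived-attribute group and whether the model detected the person.
results = [
    {"group": "curly_hair", "detected": True},
    {"group": "curly_hair", "detected": False},
    {"group": "curly_hair", "detected": True},
    {"group": "straight_hair", "detected": True},
    {"group": "straight_hair", "detected": True},
    {"group": "straight_hair", "detected": True},
    {"group": "straight_hair", "detected": False},
]

def detection_rate_by_group(records):
    """Fraction of people the model detected, per perceived-attribute group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["detected"])
    return {g: hits[g] / totals[g] for g in totals}

rates = detection_rate_by_group(results)
# The gap between the best- and worst-served groups is one simple
# fairness signal a benchmark like FACET can surface.
disparity = max(rates.values()) - min(rates.values())
```

A disparity near zero suggests the model treats the groups similarly on this metric; a large gap flags a group the model underserves.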

In addition to launching FACET, Meta is re-licensing its DINOv2 computer vision model under the Apache 2.0 open-source license, permitting commercial use. The move furthers Meta’s stated commitment to fairness in AI, as it lets other developers build on DINOv2 to create more equitable systems.

Chloe Bakalar, Meta’s chief ethicist, emphasized the company’s dedication to advancing AI systems responsibly, especially concerning historically marginalized communities.

The sources for this piece include an article in Axios.
