
FACET: Benchmarking fairness of vision models

FACET is a comprehensive benchmark dataset from Meta AI for evaluating the fairness of vision models across classification, detection, instance segmentation, and visual grounding tasks involving people.

Vision models can have biases

FACET helps measure performance gaps for common use cases of computer vision models and answer questions such as the following (a sketch of how such a gap can be computed appears after the list):

  • Are models better at classifying people as skateboarders when their perceived gender presentation has more stereotypically male attributes?
  • Are open-vocabulary detection models better at detecting backpackers who are perceived to be younger?
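In practice, questions like these reduce to comparing a performance metric across groups defined by a perceived attribute. The snippet below is a minimal sketch of such a gap computation in Python; the annotation field names ("class", "perceived_gender_presentation") and the toy data are illustrative assumptions, not the official FACET schema.

```python
# Hedged sketch: per-group recall and the gap between groups for one class.
# Field names and toy data below are assumptions, not the FACET release format.
from collections import defaultdict

def recall_by_group(annotations, predictions, target_class):
    """Recall for a single person-related class, broken down by perceived attribute."""
    hits, totals = defaultdict(int), defaultdict(int)
    for ann, pred in zip(annotations, predictions):
        if ann["class"] != target_class:
            continue
        group = ann["perceived_gender_presentation"]
        totals[group] += 1
        if pred == target_class:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy example: three annotated skateboarders, one missed by the model.
anns = [
    {"class": "skateboarder", "perceived_gender_presentation": "more masculine"},
    {"class": "skateboarder", "perceived_gender_presentation": "more feminine"},
    {"class": "skateboarder", "perceived_gender_presentation": "more feminine"},
]
preds = ["skateboarder", "skateboarder", "doctor"]

per_group = recall_by_group(anns, preds, "skateboarder")
gap = max(per_group.values()) - min(per_group.values())
print(per_group, "gap:", gap)
```

A large gap between groups on the same class is the kind of disparity FACET is designed to surface; the same pattern extends to detection and segmentation metrics computed per group.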

FACET by the Numbers

The FACET dataset consists of 32k images from SA-1B, labeled for demographic attributes (e.g., perceived age group), additional attributes (e.g., hair type), and person-related classes (e.g., doctor). We encourage researchers to use FACET to benchmark fairness across vision and multimodal tasks.
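Before benchmarking, it can help to check how many annotated people fall into each class and attribute group. The sketch below assumes the annotations are available as a CSV with columns named "class1" and "perceived_age_group"; those names and the filename "annotations.csv" are assumptions about the released format, not a documented API.

```python
# Hedged sketch: tallying annotation coverage per class and perceived age group.
# Filename and column names are assumed, not taken from FACET documentation.
import csv
from collections import Counter

counts = Counter()
with open("annotations.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[(row["class1"], row["perceived_age_group"])] += 1

# Per-class, per-group counts help confirm each group is represented
# before comparing model performance across groups.
for (cls, age_group), n in sorted(counts.items()):
    print(f"{cls:>15} | {age_group:>10} | {n}")
```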