At Voxel51, we share many of the same motivations as the community behind DataEthics4All. We believe that machine learning has grown in scale and speed to the point where it is becoming difficult for data scientists to truly get to know their data. This is especially true in fields like computer vision, where practitioners deal with high-dimensional visual data such as images and videos. Problematic biases can easily stay hidden if datasets are collected arbitrarily and model performance is not properly analyzed.
We produce an open-source tool, FiftyOne, designed to help machine learning scientists and engineers discover biases and debug their datasets and models. We (Jason Corso, Co-founder and CEO, and Eric Hofesmann, Engineer) had the opportunity to join the Ethics4NextGen AI event and host a talk and Bootcamp centered around open-source machine learning tools and how to use FiftyOne to understand and improve your visual datasets and models.
Jason gave a presentation on the criticality of understanding your data in modern machine learning and the consequent need for open-source tools in machine learning work. During the talk, we presented a thorough picture of the machine learning workflow and the open-source tools, including FiftyOne, that provide compelling end-to-end solutions for the machine learning practitioner. We also presented recent case studies that use FiftyOne to visually inspect bias and related issues in visual datasets in the context of smart policing, which was a major theme of the Summit.
During the Bootcamp leading up to the Ethics4NextGen hackathon, we presented a deep dive into using FiftyOne to explore datasets and model predictions. The goal was to show how FiftyOne can help develop new methods in the key areas of predictive policing, criminal justice, and COVID-19 contact tracing, which were the focus of the hackathon.
To start off the Bootcamp, we provided an overview of FiftyOne and how it fits into the machine learning lifecycle. As we see it, developing an AI system occurs in a loop: data collection and annotation, model construction and training, and evaluation and analysis, which in turn drive further dataset refinement and model updates. FiftyOne fits snugly into various pieces of this cycle, helping users discover the best data to annotate, visualize their data and annotations, find mistakes in their dataset, compare model predictions, find failure modes of their model, and more.
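To make the evaluation-and-analysis step concrete, here is a minimal sketch of that loop in FiftyOne. It uses the "quickstart" dataset from the FiftyOne dataset zoo, which ships with ground truth labels and pre-computed model predictions; the field names "ground_truth" and "predictions" are specific to that dataset, so substitute your own.

```python
import fiftyone as fo
import fiftyone.zoo as foz

# Load a small sample dataset that ships with ground truth labels
# and pre-computed object detection predictions
dataset = foz.load_zoo_dataset("quickstart")

# Evaluate predictions against ground truth; this marks every
# prediction as a true positive, false positive, or false negative
results = dataset.evaluate_detections(
    "predictions", gt_field="ground_truth", eval_key="eval"
)
results.print_report()

# The eval_key populates per-sample counts (eval_tp, eval_fp, eval_fn),
# so sorting by false positives surfaces the model's worst failures
view = dataset.sort_by("eval_fp", reverse=True)

# Inspect the failure modes visually in the FiftyOne App
session = fo.launch_app(view)
```

Sorting by the per-sample `eval_fp` count is a simple but effective way to jump straight to the images where a model is most confused, rather than scrolling through the dataset at random.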
In this Bootcamp, we demonstrated FiftyOne on two tasks: pedestrian detection and facial recognition. We analyzed the CityPersons dataset with a pedestrian detection model and found ways to improve the model's performance. Additionally, we used FiftyOne to analyze the Labeled Faces in the Wild (LFW) facial recognition dataset and found a severe bias with regard to race and gender: the distribution of available faces is heavily weighted toward white men, as shown in the figure below. FiftyOne could be used to balance this dataset to avoid training a biased model on it; a sketch of this kind of check follows the figure. If you are interested in this Bootcamp, a recording is available in the DataEthics Institute at DataEthics4All.org.
Figure 1: Gender bias of facial recognition dataset Labeled Faces in the Wild visualized in FiftyOne
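As a rough illustration of the kind of check behind the figure above, the sketch below counts attribute values to expose demographic imbalance. Note that LFW does not ship with demographic labels, so the dataset name ("lfw-demographics") and the "gender" and "race" fields here are hypothetical annotations you would add yourself.

```python
import fiftyone as fo
from fiftyone import ViewField as F

# Hypothetical: a FiftyOne dataset of LFW faces to which demographic
# annotations ("gender" and "race" Classification fields) were added
dataset = fo.load_dataset("lfw-demographics")

# Count samples per attribute value to expose imbalances like the
# one shown in the figure above
print(dataset.count_values("gender.label"))
print(dataset.count_values("race.label"))

# Isolate an underrepresented group for closer inspection in the App
view = dataset.match(F("gender.label") == "female")
session = fo.launch_app(view)
```

From such counts, one could build a more balanced training split, for example by taking an equal number of samples per group with `take()`.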
The Ethics4NextGen AI event was a great opportunity to interact with others who share our view that a concerted effort is needed to vet machine learning models and datasets for biases that could lead to unethical and even dangerous outcomes. Our contribution to this goal is FiftyOne, which helps data scientists get close to their data and uncover failure modes in their datasets and models that could lead to biased AI applications.