White Cube / Black Box
White Cube / Black Box is a collaboration between artists, designers, curators, and data scientists at the University of Michigan Museum of Art (UMMA), the Michigan Institute for Data Science (MIDAS), and the Stamps School of Art and Design that attempts to shed light on the opaque decision-making processes within museum collecting practices and machine learning algorithms.

White Cube / Black Box seeks to identify bias and the many ways bias gets introduced into and amplified within systems. In art, the phrase “White Cube” references the history of exclusionary practices within museums and galleries: sterile white walls and decontextualized spaces divorce works of art from the outside world, making them less approachable and accessible. In technology, the “Black Box” is a controversial metaphor for automated systems whose decision-making processes are very difficult or even impossible to understand.

The resulting art installation featured some of the interesting, curious, and troubling findings our research uncovered, both about facial-recognition technology and about the history of representation in UMMA’s collection of approximately 24,000 works.

We applied one of the most widely used facial detection algorithms to UMMA’s art collection. After detecting faces in UMMA’s artworks, we applied a race classification algorithm, drawing on the FairFace dataset for examples of faces labeled by race, to examine the collection’s diversity. We then used these results to characterize and visualize the racial diversity of the acquisitions made under each of UMMA’s directors.

We used a technique called “eigenfaces” to explore variation among the faces found in UMMA’s collection and to understand which features matter most in detecting a face. Using the eigenfaces, the computer identified a painting of a clown, whose makeup caricatures a face, as the most representative face in UMMA’s collection.
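Eigenfaces are the principal components of a matrix of flattened face images: directions of greatest pixel-level variation across the collection’s faces. A minimal NumPy sketch is below; the “most representative” criterion shown, the face whose eigenface coordinates lie closest to the mean, is one plausible reading, since the project does not specify how it ranked faces.

```python
import numpy as np

def eigenfaces(face_matrix, k):
    """Top-k eigenfaces of a (n_faces, n_pixels) matrix of flattened faces.

    Returns (mean_face, components), components shaped (k, n_pixels).
    """
    mean_face = face_matrix.mean(axis=0)
    centered = face_matrix - mean_face
    # Right singular vectors of the centered data are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]

def most_representative(face_matrix, mean_face, components):
    """Index of the face whose eigenface coordinates are closest to the mean."""
    coords = (face_matrix - mean_face) @ components.T
    return int(np.argmin(np.linalg.norm(coords, axis=1)))
```

Each face can then be described by a handful of eigenface coordinates instead of thousands of raw pixels, which is what makes comparisons like “most representative” tractable.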

By applying facial detection algorithms to UMMA’s art collection, we visualize bias in the museum’s collecting practices throughout its 150-year history. We can also see the ways algorithms amplify human bias. Our research brings transparency to the opaque decision-making processes within museum collection practices and machine learning algorithms at a moment when these rapidly evolving technologies are being deployed across the world.