20.500.12592/wg0csb

Managing the risks of inevitably biased visual artificial intelligence systems

26 Sep 2022

Scientists have long been developing machines that attempt to imitate the human brain. Just as humans are exposed to systemic injustices, machines learn human-like stereotypes and cultural norms from sociocultural data, acquiring biases and associations in the process. Our research shows that bias is reflected not only in the patterns of language but also in the image datasets used to train computer vision models. As a result, widely used computer vision models such as iGPT and DALL-E 2 generate new explicit and implicit characterizations and stereotypes that perpetuate existing biases about social groups, which in turn further shape human cognition.
Topics: Emerging Markets; U.S. Politics & Government; Technology & Innovation

Authors

Aylin Caliskan, Ryan Steed

Published in
United States of America