Be Aware of Potential Bias with AI

By Susan Chance, American Society of Employers

Artificial Intelligence (AI) is being used more often and for a wide variety of purposes. Michigan State University received a $1.7 million grant to use AI to find new drugs for treating diseases. Cybersecurity also relies on AI in various forms, such as facial recognition used to “verify a person’s real-world identity.” These seem to be good uses for AI.

However, the use of AI can be very controversial. For example, AI can be biased. A recent study conducted by the following institutions showed that robots can become racist and sexist:

  • Georgia Institute of Technology
  • University of Washington
  • Johns Hopkins University
  • Technical University of Munich

During the study, robots were tasked with scanning blocks with people’s faces on them and putting the “criminals” in a box. The robots repeatedly chose blocks showing the faces of Black men. When given directions using words such as “homemaker,” the robots chose blocks with pictures of women.

The robots were running a popular AI algorithm, which goes to show that bias can be built into the programming itself, causing the robots to make biased choices.

The White House has released a Blueprint for an AI Bill of Rights, which is meant to guide how algorithms used in AI are designed and deployed in both the public and private sectors.

This “Blueprint” contains five principles intended to “guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.”

1 - Safe and Effective Systems Principle

This addresses protecting people from “unsafe or ineffective AI systems” and “encourages development involving ‘diverse communities, stakeholders, and domain experts’ to identify risks.” It also calls for testing and identifying risks before an AI system is deployed, as well as ongoing monitoring of the system once it is in use.

2 - Algorithmic Discrimination Protections Principle

This principle encourages AI system developers to use proactive and continuous measures to guard against “algorithmic discrimination.”

3 - Data Privacy Principle

The data privacy principle addresses the need for data privacy protections to be built into AI systems and encourages obtaining consent before collecting and using personal data.

4 - Notice and Explanation Principle

This principle addresses the need for AI system developers to describe their systems’ functionality in plain language, to make clear that AI decision making is in use, and to explain the role AI plays.

5 - Human Alternatives, Consideration, and Fallback Principle

This principle recommends that AI systems allow subjects to opt out of automated decision making where appropriate, granting them the alternative of a human decision maker.

While these principles are not law, any employer using or considering using AI should take these principles under advisement and do all they can to ensure any AI systems they employ are not inherently biased.
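As one concrete illustration of what checking an automated tool for algorithmic discrimination might look like in practice, below is a minimal, hypothetical Python sketch that compares selection rates across demographic groups in screening outcomes and flags large gaps for human review. The data, group labels, function names, and the 0.8 ratio threshold (a common rule-of-thumb figure in adverse-impact analysis) are all illustrative assumptions, not part of the Blueprint or of any particular vendor’s tool.

```python
# Hypothetical sketch: spot-checking an automated screening tool for
# disparate selection rates across groups. Data and thresholds are
# illustrative assumptions only.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparities(rates, ratio_threshold=0.8):
    """Flag groups whose selection rate falls below a chosen fraction of the
    highest group's rate. The 0.8 default mirrors the common 'four-fifths'
    rule of thumb in adverse-impact analysis; it is an assumption here, not
    a legal standard drawn from the Blueprint."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r / best < ratio_threshold}

if __name__ == "__main__":
    # Hypothetical screening outcomes: (demographic group, passed screen?)
    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(outcomes)
    print("Selection rates:", rates)
    print("Groups flagged for human review:", flag_disparities(rates))
```

A check like this is only a starting point; any flagged gap would still need review by people who understand the tool, the data it was trained on, and the jobs it is being used to fill.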

 

Sources: washingtonpost.com, akingump.com, humanmedicine.msu.edu, msutoday.msu.edu, futurelearn.com, sl.acm.org, whitehouse.gov
