November 15, 2024

The push for explainable AI

Author: Ray Schroeder

BY DEREK B. JOHNSON, GCN

While organizations are ultimately legally responsible for how their products, including algorithms, behave, many encounter what is known as the “black box” problem: the decisions made by a machine learning algorithm become more opaque to human managers over time as it takes in more data and draws increasingly complex inferences. The challenge has led experts to champion “explainability” as a key factor for regulators in assessing the ethical and legal use of algorithms; in essence, an organization should be able to demonstrate that it knows what information its algorithm is using to arrive at its conclusions.

The Algorithmic Accountability Act would give the Federal Trade Commission two years to develop regulations requiring large companies to conduct automated decision system impact assessments of their algorithms and to treat discrimination resulting from those decisions as “unfair or deceptive acts and practices,” opening those firms up to civil lawsuits.
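The article does not prescribe any particular technique, but one common way to show what information a model relies on is permutation feature importance. The sketch below uses scikit-learn on a synthetic dataset as an illustration only; the data, model choice, and feature labels are hypothetical stand-ins, not anything described in the GCN piece.

# Minimal sketch of one explainability technique: permutation feature
# importance, which estimates how much each input feature drives a model's
# predictions. Everything here is illustrative, not from the source article.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an organization's decision data.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops; a large
# drop means the model leans heavily on that feature when making decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")

Reports like this, produced and reviewed regularly, are one concrete form the impact assessments contemplated by the Algorithmic Accountability Act could take.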

https://gcn.com/articles/2019/07/03/explainable-ai.aspx

