PROFESSOR SARA GERKE CO-AUTHORS A NEW PIECE IN SCIENCE

July 2021 — Professor Sara Gerke co-authors a new piece in Science on the drawbacks of explainable artificial intelligence (AI) algorithms in health care. Her co-authors include scholars from Harvard Law School, INSEAD, and the University of Toronto.

Artificial intelligence (AI), especially of the machine learning (ML) variety, is increasingly being used in health care. Some forms of AI/ML, which employ so-called “black box” algorithms, have worried patients, physicians, and policymakers. A near consensus in favor of explainable AI is emerging among academics, governments, and civil society groups. But, as Professor Gerke and her co-authors argue in this new commentary, requiring explainable AI/ML may not accomplish the goals set out for it in health care and may have some worrying effects.

Key takeaways are below: 

  • Explainable AI/ML (unlike interpretable AI/ML) systems do not offer the actual reasons behind black-box predictions. Rather, they offer after-the-fact rationales for black-box functions.
  • These systems can lack robustness, and they do not necessarily support trust and accountability. 
  • Explainable AI/ML also comes with additional costs: such explanations may mislead users, offering a false sense of understanding and confidence.
  • From a regulatory perspective, requiring explainability for health care AI/ML may limit innovation — in some instances, limiting oneself to algorithms that can be explained sufficiently well may undermine accuracy.
  • Instead of focusing on AI/ML explainability, regulators like the U.S. Food and Drug Administration (FDA) should closely scrutinize the aspects of health AI/ML that directly affect patients, such as safety and efficacy, and consider placing greater emphasis on clinical trials.
  • These arguments about the drawbacks of AI/ML explainability may apply more broadly to AI/ML in other domains, including those outside of health care.

To read the full piece, click here.


Professor Sara Gerke is an Assistant Professor of Law at Penn State Dickinson Law. Her research focuses on the ethical and legal challenges of artificial intelligence and big data for health care and health law in the United States and Europe. Before joining Penn State Dickinson Law, Professor Gerke served as a Research Fellow in Medicine, Artificial Intelligence, and Law at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School for the Project on Precision Medicine, Artificial Intelligence, and the Law (PMAIL).