Ethics and Artificial Intelligence in Health Care: The Pivot Point

Feb 10, 2020
John Stern

In November 2019, Google and Ascension reminded the public that giants walk the Earth when they clarified the nature of their business agreement, first announced in July 2019. To wit, the companies revealed that the health care data of up to 50 million Americans would be transferred from Ascension to Google’s Healthcare Cloud, where artificial intelligence (AI) and machine learning can use the data to develop predictive algorithms.

As the project director overseeing Mathematica’s participation in the AI Health Outcomes Challenge sponsored by the Center for Medicare & Medicaid Innovation, I felt the jolt of this announcement: we are building our AI solution on that very same AI-enabled Google Healthcare Cloud. We expected the swift media reaction that followed, but we were initially surprised, and more than a little worried, when the U.S. Department of Health and Human Services (DHHS) Office for Civil Rights director declared, just two days after the Google and Ascension announcement, that DHHS “would like to learn more information about this mass collection of individuals’ medical records with respect to the implications for patient privacy under HIPAA.”

As we talked, however, we realized that this could be a pivot point for the industry, the start of a crucial conversation we need to have. The technology is already functioning and is being adopted rapidly, more broadly, and in more critical life-or-death health care applications; those of us involved in health care research and data stewardship can no longer put off addressing the ethical standards that should guide the use of AI in health care settings.

Although time will tell which improvements this technology can bring, and at what cost, it is past time to tackle the bigger ethical considerations that loom large over the future of the industry. What rights do individuals have to withhold or withdraw consent regarding the use of their data in massively aggregated data sets for developing AI predictors of health outcomes? What are individuals’ privacy rights, and, for that matter, how is privacy defined in the context of AI and algorithm generation? What rights do companies have to develop algorithms using these data? Who profits, and how? Who is given access to the prediction, and what do they get to do with it? Who is liable when the prediction is wrong: the doctor, the patient, the data provider, or the algorithm provider? Who is liable when the prediction is ignored? How do we prevent potentially negative consequences for disproportionately underserved, underrepresented, vulnerable, or so-called risky populations? Finally, is there really any difference between a machine-learning predictive algorithm and an established statistical model, and what does that suggest with regard to ethical use?
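That last question deserves a concrete illustration. Below is a minimal sketch, in Python with the scikit-learn library and entirely synthetic data (no real patient records; the “risk factors” and the outcome are hypothetical), showing that a textbook statistical model such as logistic regression slots into exactly the same training-and-prediction pipeline as a machine-learning method such as gradient boosting. Whatever label we put on the method, the ethical questions about consent, privacy, profit, and liability attach to the prediction and its use.

```python
# Minimal sketch: the same synthetic prediction task, solved once with a
# classical statistical model (logistic regression) and once with a
# machine-learning method (gradient boosting). All data is randomly
# generated; no real patient records are involved.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=0)
n = 5_000
X = rng.normal(size=(n, 4))             # four hypothetical risk factors
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]   # true signal lives in two of them
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # hypothetical binary outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [
    ("logistic regression (statistics)", LogisticRegression()),
    ("gradient boosting (machine learning)", GradientBoostingClassifier()),
]:
    # Identical fit/predict workflow regardless of the method's pedigree
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```

Both models are trained on individuals’ data, both emit a risk score about a person, and both can be wrong; nothing in the code distinguishes the “statistical” model from the “AI” one for purposes of consent, privacy, or liability.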

Answering these questions will be critical to adopting AI in health care, and lessons from the not-so-distant past can help guide the way. For example, consider the digital wearables market and the impact that settling these ethical questions just a few years ago has had on the industry.

The chances are very high that today you, or somebody you know, has some sort of digital wearable, like an Apple or Garmin watch. These devices have become so widespread that the technology is now ubiquitous, but that was not the case seven years ago. As the digital wearables market progressed, the dialogue grew contentious enough that in 2014 the Food and Drug Administration issued guidance on wearable-generated data privacy rights, device accuracy and reliability, and other related topics.

States grappled with these issues as well. When I served in the state of Vermont’s Agency of Health & Human Services, the governor and chief information officer convened a statewide Healthcare Data Governance Committee, and one of its topics of concern was the ethical framework for using wearable-generated health data. Other states, such as Nevada, New York, and Colorado, took similar or more aggressive actions, even going so far as to support incubators to get these technologies into the marketplace.

The dialogue at all levels helped establish the ground rules, which positioned the wearables market to expand further into health care. There are now at least 30 well-established wearable device manufacturers, such as Apple, Garmin, Nike, and Suunto, whose devices track the wearer’s heart rate and other physical and health data points and store the data in the cloud, and new start-ups continue to enter the field. In June 2018, the University of Colorado Hospital in Denver announced plans to “create an innovation center and to work with medical technology start-up companies on artificial intelligence, big data, decision support, virtual health, and wearables, among other technologies.” With new and improving sensors, the use of wearables will continue to expand, but only because a general ethical framework covering privacy, profit, liability, and use has been established and serves as a basis on which the industry can grow in a way that the public accepts.

Tackling the issue of ethical applications of AI in health care will require a national conversation with input from a broad cross section of stakeholders. This idea is gaining traction. For example, Vermont’s legislatively created AI task force recently released a report on AI adoption that recommends “the adoption of a Code of Ethics for AI development and use in Vermont,” a code the task force indicates should be based on the European Code of Ethics.

In the meantime, organizations like Mathematica can take small steps along the way. As part of our submission for the AI Health Outcomes Challenge, we’ve made a conscious decision to emphasize the importance of trust. We have teamed with the Patient Advocate Foundation and The Health Collaborative and are conducting human-centered design analyses to determine the best way to help users of our AI-based algorithm understand and adopt it. We know that trust is one of the biggest issues facing this transformationally powerful technology and its potential to benefit all actors in the health care space: payers, providers, and patients. That is a challenge we can address. But regardless of the methodologies we develop to build trust, without a nationally agreed-upon ethical framework defining the foundational principles for using AI in health care, the growth of AI applications, and the realization of AI’s benefits in health care, will be haphazard at best.

Now, with DHHS’s involvement, we might be closer than ever to a national consensus on the answers to these questions, one that is inclusive of different perspectives and propels both the industry and the quality of health care in the United States forward. The time is right for broadscale public discourse on the topic, and the need for an ethical foundation for this technology’s use demands it.
