How to Lie with AI—and Why Public Health Agencies Should Care

Aug 29, 2024

2024 marks the 70th anniversary of the publication of How to Lie with Statistics, a book that exposed how easy it is to manipulate data, algorithms, and output. The anniversary is a good opportunity to reflect on the fact that much of today’s advanced analytics, such as machine learning (ML), natural language processing (NLP), and generative artificial intelligence (AI), is built on probability theory, a fundamental part of statistics. Public health researchers use probability theory to understand the relationship between exposure and health effects (for example, disease transmission, vaccine effectiveness, injury prevention, and much more). Advanced analytics holds the potential to improve public health in many ways, but it can also be used in ways that harm our health. For example, some companies are exploring how to use AI to develop more effective marketing strategies for the tobacco industry.
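
To make the probability connection concrete, the short sketch below works through the kind of calculation this framing refers to: estimating vaccine effectiveness as one minus the relative risk of disease among vaccinated versus unvaccinated people. It is an illustrative example only; every count in it is invented.

```python
# Illustrative sketch only: estimating vaccine effectiveness (VE) from a
# hypothetical cohort using the standard relationship VE = 1 - relative risk.
# All counts below are invented for demonstration.

def attack_rate(cases: int, population: int) -> float:
    """Probability of disease in a group: cases divided by group size."""
    return cases / population

# Hypothetical counts (not real data)
vaccinated_cases, vaccinated_total = 30, 10_000
unvaccinated_cases, unvaccinated_total = 150, 10_000

risk_vaccinated = attack_rate(vaccinated_cases, vaccinated_total)        # 0.003
risk_unvaccinated = attack_rate(unvaccinated_cases, unvaccinated_total)  # 0.015

relative_risk = risk_vaccinated / risk_unvaccinated  # 0.2
vaccine_effectiveness = 1 - relative_risk            # 0.8, i.e., 80%

print(f"Relative risk: {relative_risk:.2f}")
print(f"Estimated vaccine effectiveness: {vaccine_effectiveness:.0%}")
```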

The misuse of statistics to manipulate people’s behavior is not a new problem. Public health authorities have always needed to be a source of truth for their communities. This year’s Health Datapalooza plenary keynote speech, “Crafting Truth: The Art and Science of Public Health Storytelling in the Age of Misinformation,” highlights the problem and the need to share accurate public health data with the public. Now, more than ever, it is critical that public health officials, and anyone else looking for real answers, continue to make the following three principles the bedrock of their work:

  1. Evaluate data for equity. Many of us know the adage “garbage in, garbage out,” meaning that bad data leads to inaccurate results. But how many of us have heard “accurate and complete data in, garbage out”? Very few, no doubt, and yet research has documented this scenario more often than one might expect. Our data often reflects historical practice, and if that history has been inequitable, then training machine learning models and AI on data from that history will perpetuate inequitable answers. For example, even a complete record of a patient’s history, which would often be considered “high-quality, complete data,” captures only the care that patient actually received; barriers such as access, transportation, and lost wages keep many people from seeking care, so the data can badly misrepresent the care a person truly needs. Underreporting of race and ethnicity, underdiagnosis in historically marginalized populations, and structural incentives to diagnose one disease over another are well known, and they only scratch the surface of the biases lurking in the data. It is therefore imperative to review data before using it, to understand how well it represents the population of interest, and to consider what underlying factors were used to create it (a simple representation check is sketched in the first code example after this list).
  2. Ensure methodological rigor. The surge of ML and AI features built into applications creates the risk that people will use them without understanding the underlying mathematics, how they account for error, or how they adjust for bias. A number of studies have already described biases inherent in ML and AI algorithms. Implementing flawed technology can have devastating effects on public health initiatives, as in the cases of AI chatbots that promoted eating disorders and poorly trained algorithms that misclassified patients’ COVID risks. As a result, public health now faces the dual challenge of combating both the underlying problem (such as eating disorders or COVID) and the perception that tools and results must be correct because they are backed by advanced analytics. To meet these challenges, public health officials must use these same technologies and advanced analytic tools, coupled with methodological rigor, to build trust and shape public behavior. Methodological rigor requires sound study design before results are generated, understanding the algorithms and their limitations, and keeping lived-experience experts in the loop to review results and give context throughout the analysis (one simple error-rate audit is sketched in the second code example after this list).
  3. Report with humility and center community voices. Even when we have reviewed data for equity and applied the most rigorous methodologies, we must accept that there are limits to what can be seen and understood with quantitative data alone. As discussed, ML and AI are built on probability theory. They are designed to find the most probable answer, which is not always the right answer or the complete answer. Lived experience cannot be discounted or overshadowed; it adds needed, and often missing, context for sense-making. Researchers, and anyone else truly working to find real solutions to complex problems, must find ways to incorporate the perspectives of people with lived experience into the design and execution of their work and the conclusions drawn from it. For example, Mathematica has established a Lived Experience Expert Panel (LEEP) that collaborates with our teams as we design and execute work for our clients. The LEEP is composed of people of different demographic backgrounds, abilities, genders, races, Tribes, and sexual identities, representing various professional groups and members of the public health community, and bringing diverse lived experiences. We collaborate with the LEEP to inform how we approach our work and to ensure that community voices and the realities of the people most affected stay at the center of the work we do.
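
To illustrate the first principle, here is a minimal, hypothetical sketch of one pre-analysis equity check: comparing how groups are represented in a dataset against a reference population of interest before the data is used to train anything. The field name, group labels, and reference shares are invented for illustration; a real check would use the demographic fields and population estimates relevant to the analysis.

```python
# Minimal, hypothetical sketch of a pre-analysis representation check.
# The field name, group labels, and reference shares are invented.
from collections import Counter

def representation_gap(records: list[dict], reference_shares: dict[str, float],
                       field: str = "race_ethnicity") -> dict[str, float]:
    """Return each group's share in the data minus its share in the reference population."""
    counts = Counter(record.get(field, "unknown") for record in records)
    total = sum(counts.values())
    groups = set(counts) | set(reference_shares)
    return {g: counts.get(g, 0) / total - reference_shares.get(g, 0.0) for g in groups}

# Hypothetical records and reference shares (for example, from census estimates)
records = ([{"race_ethnicity": "group_a"}] * 700
           + [{"race_ethnicity": "group_b"}] * 200
           + [{}] * 100)  # records missing race/ethnicity are counted as "unknown"
reference_shares = {"group_a": 0.55, "group_b": 0.40}

for group, gap in sorted(representation_gap(records, reference_shares).items()):
    print(f"{group}: {gap:+.1%} relative to the reference population")
```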
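
And as one small, concrete piece of the second principle, the sketch below checks whether a model’s error rate differs across groups instead of relying on a single overall accuracy figure. The outcomes, predictions, and group labels are again invented; in practice this kind of audit would be run on a model’s real validation data.

```python
# Minimal, hypothetical sketch of a subgroup error-rate audit.
# Outcomes, predictions, and group labels are invented for demonstration.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Share of wrong predictions within each group."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        wrong[group] += int(truth != pred)
    return {group: wrong[group] / total[group] for group in total}

# Hypothetical results: overall accuracy is 80%, which looks acceptable,
# but every error falls on group_b, whose error rate is 50%.
y_true = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1, 1, 0]
groups = ["group_a"] * 6 + ["group_b"] * 4

for group, rate in error_rates_by_group(y_true, y_pred, groups).items():
    print(f"{group}: error rate {rate:.0%}")
```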

For those inclined to misuse AI, it is easy to see that feeding in biased data or shopping around for algorithms will produce the answers they want. That’s how to lie with AI. Our goal, however, is not to teach anyone how to misuse AI, but to remind public health agencies how easy it is, intentionally or unintentionally, to generate erroneous results that appear to be true. Communities depend on public health agencies not only to maintain scientific rigor in everything they do, but also to proactively combat a likely onslaught of seemingly sound AI-generated results that run counter to public health goals.

About the Author