ACF's Lauren Supplee on Boosting the Use of Social Policy Research

Dec 04, 2024
Emily Moiduddin and Lauren Supplee

When the White House announced its Blueprint for the Use of Social and Behavioral Science to Advance Evidence-Based Policymaking in May, it singled out the federal Head Start program as a model of how social and behavioral science can spur a cycle of continuous improvement for a government program that benefits children and families.

“It was really exciting to see it called out,” recalled Lauren Supplee in the latest episode of Mathematica’s On the Evidence podcast. Supplee is the deputy assistant secretary for planning, research, and evaluation at the U.S. Department of Health and Human Services’ Administration for Children and Families (ACF), where she leads ACF’s Office of Planning, Research, and Evaluation (OPRE). OPRE, along with partners in the Division of Child and Family Development and the Office of Head Start, has played a leading role in generating the body of knowledge that has strengthened the Head Start and Early Head Start programs over many decades.

For the episode, Mathematica’s Emily Moiduddin, an expert on early childhood programs who has conducted research about Head Start for OPRE, joined On the Evidence host J.B. Wogan to talk with Supplee. They discussed the White House blueprint on behavioral and social science, the role of research in informing improvements to Head Start over time, how evidence in social policy has changed since OPRE was established almost 30 years ago, how the Foundations for Evidence-Based Policymaking Act is changing the use of evidence in the federal government, common barriers to using social and behavioral science in federal decision making and practices, and insights from Supplee’s blog series on boosting use of research evidence.

“It doesn't matter whether we're talking about social policy, international relations, climate science, or education, the findings have been very consistent about what it takes to get research used,” Supplee said. The use of research boils down to three components: relationships, relevance, and routines, she explained. “It's shared in the context of a trusted relationship … the work is timely and topically relevant to the decision being made,” and the research is “embedded in the existing routines [of] how people do their work.”

Watch the recorded interview below.

Transcript

[LAUREN SUPPLEE]

It doesn't matter whether we're talking about social policy, international relations, climate science, or education, the findings have been very consistent about what it takes to get research used: it's shared in the context of a trusted relationship -- it's like a theme on this podcast, and it could be with an individual or a trusted organization; the work is timely and topically relevant to the decisions being made -- again, something else that I've talked about a lot today; and then routines, embedded in the existing routines. I think often researchers will step in and say, "I've created a tool for you to use research." But then, if it's not how someone does their job, it's unlikely to be used -- so really stopping and better understanding how people do their work and then how we could integrate science into that.

[J.B. WOGAN]

I'm J.B. Wogan from Mathematica, and welcome back to On the Evidence. For this episode, I'm joined by Mathematica's Emily Moiduddin, who is an expert on early care and education, including Head Start and Early Head Start, which are two federally funded programs designed to promote school readiness for infants, toddlers, and preschoolers. Welcome, Emily.

[EMILY MOIDUDDIN]

Thanks, J.B. I’m glad to be here.

[J.B. WOGAN]

So, Emily, we have a very special guest who we interviewed together, Lauren Supplee. What should listeners know about Lauren?

[EMILY MOIDUDDIN]

Well, J.B., Lauren heads the Office of Planning, Research, and Evaluation, also known as OPRE, at the Administration for Children and Families, or ACF, which is part of the U.S. Department of Health and Human Services. She has a few different job titles. She's the deputy assistant secretary for planning, research, and evaluation at ACF. She's also the chief evaluation officer and scientific integrity officer at ACF. So basically, she's one of the leading voices within the Federal Government on the use of evidence in social policy. She's spent a good part of her career focused on the area most near and dear to my heart, early childhood.

[J.B. WOGAN]

Well, wow, that sounds like a really important job and right up our alley here at On the Evidence and Mathematica. I know we're going to talk today about a report issued by the White House in May of this year called "The Blueprint for the Use of Social and Behavioral Science to Advance Evidence-Based Policymaking." Okay, I wouldn't put it past our listeners to have already read the report cover to cover; but for anyone who missed it, I thought we'd provide a little context about what the Blueprint is and how it relates to the work Lauren oversees. Sound good?

[EMILY MOIDUDDIN]

Yep, sounds good.

[J.B. WOGAN]

Okay, so to quote the report, "The Blueprint is a whole of Government effort that aims to provide a resource to assist federal decision-makers in leveraging social and behavioral science to improve policy and program design and delivery." The report acknowledges the Federal Government has a long history of using social and behavioral science to inform policies and programs, but it's not the norm; it's not as consistent as it could be. The report makes the case that if evidence were more enmeshed in policymaking and program design, the Government would be more effective in serving the American people. Then, it provides some recommendations on how to make that happen.

Emily, is there anything else that you would want to highlight for people about the Blueprint?

[EMILY MOIDUDDIN]

Yes, definitely.

So I would note that the White House also published a blog to announce the Blueprint, and that blog mentions the long history of social and behavioral science informing the creation and evolution of Head Start. It was pretty incredible; it was given as the example of how social and behavioral science can spur a cycle of continuous improvement for a program that benefits children and families. It's Lauren's office -- the Office of Planning, Research, and Evaluation -- and within that office, a lot of partners in the Division of Child and Family Development, that oversees a lot of this research in partnership with the Office of Head Start in order to use it to strengthen the Head Start and Early Head Start programs today.

It means a lot to me that Mathematica has been fortunate to partner with the Office of Planning, Research, and Evaluation on a number of projects over time related to Head Start, including long-running annual surveys that help shape those programs. So I think it was really exciting for folks at the Office of Planning, Research, and Evaluation, and definitely for my colleagues and me at Mathematica, to see the work highlighted in the White House materials in that way -- in the blog itself, but also, diving deeper into the Blueprint, in the many examples coming out of OPRE that are relevant to using this strategy to bring data to policy.

[J.B. WOGAN]

Okay, great, I think that sets the stage well for our conversation with Lauren.

I’ll just say my usual preamble -- that if you’re listening to this podcast for the first time, please take a moment to subscribe. We’re on YouTube, Apple Podcasts, Spotify, and elsewhere. If you like this episode, consider leaving us a rating and review because it helps others find our show.

With that, here’s our interview with Lauren Supplee.

[EMILY MOIDUDDIN]

Well, thank you, Lauren, again for joining us -- really excited for the conversation. Our first question is about the recent White House blog announcing the Blueprint in May that used Head Start and research about Head Start as an illustration of social and behavioral science informing federal policies to improve outcomes. I was certainly excited to see it, as a long-time Head Start researcher myself. So I'd like to start by asking: what was your reaction when you learned that this was being featured in the White House blog?

[LAUREN SUPPLEE]

Yeah, well obviously I was also very pleased. I actually got it through a colleague who forwarded it, and then immediately began to share it with other colleagues; and we started planning for OPRE to write its own blog about it. As you know, one of my key priorities in this role is a more intentional focus on evidence use to meet our mission of improving the efficiency and effectiveness of ACF programs, including Head Start. Then obviously in the longer report, they highlight a lot of our work, a lot of it around Head Start and TANF, our two longstanding portfolios in OPRE. They really have, over the years, tried many different strategies to build a solid foundation of relevant but rigorous evidence that could be leveraged to meet our mission. So it was really exciting to see it called out.

I also wanted to just note the longer document really highlights the breadth of the work -- the Head Start recruitment and engagement work; the National Survey of Early Care and Education, which has been really key for us in answering just-in-time policy questions; the Pathways to Work Clearinghouse on employment and training for low-income individuals, which synthesizes the research across that portfolio; some of our work testing innovative approaches to barriers to employment; understanding human services in rural contexts; and home visiting, the diaper distribution demonstration, and sexual risk avoidance programs. All of these are really part of our broader portfolio and highlight the breadth of ACF programs and why our research and evaluation and data work is so important to meeting our mission.

[J.B. WOGAN]

Lauren, I wanted to ask: in the blog post announcing the Blueprint, they mentioned that the Head Start program is an example of -- I don't know if they actually say "cycle of continuous improvement," but they talk about the cycle of improving Head Start over time using social and behavioral science. I was curious to hear more about some of the examples, some of the ways that evidence has informed not only the creation but then the expansion and evolution of Head Start over time.

[LAUREN SUPPLEE]

Yeah, well as you noted, the White House blog really highlighted that deep and rich history in Head Start of building evidence and then using evidence to inform program and policy. As you all know, we work very closely at OPRE with our program partners, including Head Start, to think about what questions they have; how we can do research in response to that; and how we then interpret findings and understand collaboratively how they might apply to open policy questions. It really is a partnership-based portfolio.

So in that vein, I thought it would be helpful to share some specific stories from colleagues in the Office of Head Start that really, I think, bring your question to light. I often write in my own blog series about my frustration when people say, "Research just isn't used in policy," and that's because they're taking a very narrow view of research use. They're only thinking about one type, which in the research-on-research-use field is called "instrumental use." That's when one piece of research answers a specific question; but that's exceedingly rare. It's not typical that any single piece of research is perfectly aligned with the complex issues that come up in a policy context.

If people have that notion and that's the only way they think research is used, of course they're going to think it's not being used. Sometimes we've seen decision-makers think that if a piece of research doesn't exist, maybe they stop their use of research. But what I am more commonly seeing is more subtle, kind of harder to spot; and that's often seen in things like conceptual use, where a body of research changes how someone thinks about a topic or a problem.

So in that vein of conceptual use, a great example was when Head Start was trying to grapple with whether and how to extend program service duration. This was several years ago. The Head Start staff wanted to know what the right number of hours to require was, but no research answered that exact question. So instead, they worked with OPRE, and they used a body of research. There were new studies specific to Head Start, like the Head Start Impact Study; but they also looked at research from pre-K or half-day kindergarten. There was research examining what it took to effectively implement early childhood curriculum and teacher practice, and it showed that teachers really need a solid amount of time on each aspect to engage kids in a thoughtful and intentional way in science or math or reading, and that just wasn't possible in a four-hour day.

There was research about the loss of learning over the summer. All of this together built the case that a three-and-a-half-hour program wasn't enough duration for Head Start kids. So to that end, there was a body of research that really informed that policy in the direction that cumulative exposure and duration of services over the year was really the right direction. There's a great blog that Colleen Rathgeb wrote back in 2016 about this experience that I can share so that you can include it in the show notes.

Another, related example is around the policy on classroom quality and accountability -- so what the thresholds were, and how much quality you need for positive child outcomes. Head Start used FACES data, along with NICHD-funded research and pre-K information, to understand what the necessary minimum standard for quality is, and then, beyond that, what it looks like to improve outcomes. What I learned from talking to them is that that also was then refined through that continuous quality improvement framework when it was implemented. So that's a great example of where often research isn't the only piece of information you're including; you're really combining it with other information.

They also shared examples about the research on early brain development and how Head Start learned to think about early math and how to engage kids in early math concept development. All of that developmental science literature went into the Early Learning Outcomes Framework, and then they built a tool that facilitated teachers using that framework, which is all based in science, to do lesson planning. So to me, this is a great example of a different kind of research use that Cynthia Coburn and her colleagues talk about: passive use. That's when teachers are using research but may not be aware it's research; because it's embedded in their day-to-day processes and routines, it's really an effective way to get research used.

Then I'll just share one more example. We have a rich portfolio in American Indian and Alaska Native Head Start programs, which has grown and strengthened. Hopefully, you all saw the recent videos we released, with Native teachers, program administrators, and the Office of Head Start Region XI director sharing how they used data from AIAN FACES, along with some of the information we developed through that project on the Cross-Cultural Understanding and Cultural Humility Project, to make sure that we were engaging with Head Start AIAN programs with the idea of centering culture in tribal communities, in partnership along the way. Those videos really call out how valuable that data is to their communities to understand their children, to do observations of program quality, and really embed that within that cultural context.

So, I mean, I could go on and on with more examples; but I think this shows that clear pattern of Head Start over the decades understanding and attending to the importance of research to help meet their mission goals. So I really agree with the White House blog that they have had that commitment, and it's a model I think we can build from.

[J.B. WOGAN]

A couple things you said I just want to flag for listeners. You mentioned a blog series that is on the Office of Planning, Research, and Evaluation's website, I believe. We can absolutely link to that series so people can check out some of the other things you've been writing about the use of research evidence in policy. Then, you mentioned FACES. For people who don't know what FACES is, what's the acronym and what is FACES?

[LAUREN SUPPLEE]

Yeah, so that's the Family and Child Experiences Survey, which is a survey we have been doing, I believe, since 1997. We draw a representative sample of Head Start programs and kids and parents to understand how the kids are doing and how the programs are serving parents. It's really invaluable information for Head Start to get a snapshot picture of how the programs are doing in meeting their objectives.

Then we archive that data at the University of Michigan's ICPSR -- don't ask me what that acronym means -- and that allows for secondary analysis. So that's really sparked a lot of other research around how Head Start kids and families are doing.

[J.B. WOGAN]

Okay, great. One thing I noticed when I was reading the blog: I don't think I realized that Head Start started as a demonstration and scaled up. I thought Head Start was created, and then there was a national program across the country. But it sounded to me like that's another way in which it's sort of a model: you use it to do a demonstration; you gather evidence of effectiveness; and you iterate and expand over time. Is that a fair characterization of what's happened with Head Start?

[LAUREN SUPPLEE]

I mean, certainly it was a summer demonstration that scaled up very quickly. I think there are other ACF programs that also started as demonstrations. Early Head Start also started very small and then very quickly expanded. Then there are the Healthy Marriage and Responsible Fatherhood grants in the Office of Family Assistance. They also started with a demonstration and are still technically called "demonstrations" in the legislation.

But one of the ways we talk about the integration of research, evaluation, and data is that the word "demonstration," to us, and I think to the Government lawyers, means an element of learning and continuous improvement. So that is a mechanism by which we can say this is why having a robust portfolio in these areas matters.

[EMILY MOIDUDDIN]

That's a really nice segue, I think, to another question that we have. The breadth of examples that you've given shows a lot of intentionality in terms of the research itself that's being implemented and then how it is being drawn on to inform improvement or change, whatever it might be, at different levels. All of that can be hard to do, getting back to the intentionality.

So the Blueprint recommends that agencies reduce barriers to using social and behavioral science in decision-making and practices. So in OPRE's experience, what are the most common barriers -- or, sort of flipping it, what are the intentional things people need to do; and could you give an example or two of how those barriers have cropped up in OPRE's work?

[LAUREN SUPPLEE]

Yeah, sure. Certainly, going back to my prior comments, I think evidence is used far more frequently than is maybe captured. So sometimes, yes, those barriers exist. The timeline of research production is often misaligned with policy. I think we have to stop thinking about research use as starting when a report is finished. Evidence or research is used when someone has a question that needs to be answered. So then it really turns to what are those intentional opportunities that OPRE really spends a lot of time and effort on in those moments, right? When is that policy window opening, and how can we integrate research at that moment?

So that might be the time and intention we spend on being a trusted resource for research to inform that question -- to be seen by our program partners as good listeners who understand the context, who understand how to work collaboratively with them to apply the information. I think our mission, as I said, is pretty unique in that we are there to advise ACF on improving the efficiency and effectiveness of ACF programs. So that means everything we do has to be related to relevant, quality evidence production and the facilitation of its use.

So I think very carefully about how we are intentional within existing individual and organizational routines. Like I mentioned, the teacher support tool built around the Early Learning Outcomes Framework is a good example of a routine at an individual level. Then we have routines at the ACF level, like our evaluation policy -- a key piece of infrastructure that has conditions that facilitate research use at its core. So that's where the idea of creating relevant research is really central. We also have these systems to build trust and relationships.

But we also find ways to integrate OPRE into regular Government routines, such as budget formulation processes. When we have policy formulation or decisions about implementation or regulations being developed, because we have those trusted relationships, we can integrate ourselves into that process to be at the table and to contribute how research might help strengthen that particular decision or maybe raise new questions that have to be answered.

So I am constantly thinking about how we can continue to be intentional about the infrastructure we're putting in place at ACF to help us meet our mission. I hope that we actually have more examples of facilitators than barriers.

[J.B. WOGAN]

That’s great – facilitators rather than barriers. That’s awesome!

So I had a question about what the Blueprint means for OPRE's work, because you guys are already such leaders in the use of social science in public policy. So I wonder, to what extent does it just affirm what you're already doing, and to what extent does it call on you to change business as usual?

[LAUREN SUPPLEE]

Yeah, so just to call out a little more of the specifics of what the Blueprint talks about: they're encouraging agencies to develop strategies to build a durable evidence ecosystem. I have to say I love the term "evidence ecosystem." It's something that we've been talking a lot about in the Transforming Evidence Network, which is an international, interdisciplinary network of people who sit at this intersection of research and policy, because we are all sort of creating evidence ecosystems. So the Blueprint wants this evidence ecosystem to promote meaningful engagement within and external to the Government, build the capacity of the workforce related to the social and behavioral sciences, and facilitate the generation of new evidence.

So these are many of the things that OPRE has made and will continue to make investments in -- that infrastructure I was just talking about. We are really in a good spot in that we are one of the oldest evaluation offices in the Government in the social sector. As such, we've benefitted from the time and experience it takes to develop that portfolio and those strategies and those relationships. So from that standpoint, I think many of the actions in the Blueprint are things that we already have, and I think they are roadmaps for other agencies.

Actually, earlier this year, colleagues in some of the other longstanding offices and I wrote a blog about what it takes to build that infrastructure and how long that road actually is. We will have been here 30 years come next year, and there are offices that are 15 years old -- really saying this doesn't happen overnight, but it can happen.

So I mentioned earlier our evaluation policy. We actually were the second Government office to create one; it was in 2012. I think that has really centered the work at ACF, but then we updated it in 2021 to more explicitly include language around centering lived experience and answering more questions about what works for whom under what conditions.

Then, I think the Foundations for Evidence-Based Policymaking Act in 2018 called out the importance of learning agendas as another piece of infrastructure to build that partnership-based, relevant work. We started doing that before the Evidence Act -- maybe we were some of the model for it -- and we do all of our budget formulation in partnership with our program offices. You may have seen we recently released a learning agenda on our welfare and self-sufficiency work; and that really is, I think, a true model of a co-produced, intentional, long-term research plan that will help structure the work going forward and make sure it's useful to the Office of Family Assistance.

In another example, in our evaluation policy, transparency is a principle. We have upheld that for many years by applying open science activities; and even before the open science movement had kicked off, I remember someone calling me and saying, "I want to talk to you about this open science." I thought, "Well, we already post everything online. We archive our data. We post all the measures. I think we're doing a lot of these things already."

So I appreciate the road map that the Blueprint provides, and it can be a resource for us to point to, to say why what we're doing matters. But I think we see our role as really then saying, "What's next?" How do we take what we're doing and strengthen it for the future? For example, we have prioritized hearing from people with lived experience in our work; and at the same time, we had a recent project where we had an actual community advisory board to advise us internally about our own processes, to make sure that we were following internally as well as externally what we're saying we want to prioritize.

Our new ACF data strategy is another key document to say, okay, this is where we need to go in the future to build ACF's ability to really strengthen evidence production and use.

[J.B. WOGAN]

Okay, and were there any recommendations in the Blueprint that you're exploring as a result of the Blueprint? Were there any examples where you said, "Oh, that's a good idea; that's something we could do more of"? Or perhaps the things you just listed as what's next were examples of that. But I was just curious if there was anything where you'd been inspired at all by the Blueprint.

[LAUREN SUPPLEE]

I mean, nothing specifically comes to mind. I think it really is thinking about, okay, this is where we are. Really, the Blueprint is a snapshot of where we are now. So then how do we advance that? I would like to think, because we've been around so long and we have that history, that we can serve as that model to say, "Where should we be going forward?" So to us, it's really much more of a launching point than called-out gaps in what we're already doing.

[J.B. WOGAN]

Gotcha.

[EMILY MOIDUDDIN]

So our next question really gets at some of what you were raising here about this connection between history and where we're moving going forward. You mentioned, for example, the timeline of the Evidence Act; we're at roughly the five-year anniversary of that being signed into law, and just about the 30-year anniversary of OPRE. So at least with respect to social policy, how has the role of evidence changed over the last 30 years or so, and how do you expect it to change going forward?

[LAUREN SUPPLEE]

Yeah, I love this question. In writing a blog recently about our birthday and getting ready for us to turn 30, I was really reflecting on how the role of evidence and evidence use has changed over those 30 years. As part of writing that blog, I went back to the book "Fighting for Reliable Evidence" by Howard Rolston and Judy Gueron. They talk about the importance of fighting, at that time, for the acceptance and normalization of randomized controlled trials in social policy research, which now is just an accepted tool in the overall evidence production toolbox.

In 1993, you have the Government Performance and Results Act. Really, it's an initial stab at trying to get agencies to set goals and measure results and report on their progress. That, along with the AFDC waivers around that time, I think was a catalyst. It was really the mid-'90s when the role of evidence started to take off. We could see there that the term "random assignment" started to be used by members of Congress and in multiple administrations, which was really interesting to see.

I think there were other developments around the early 2000s that also had an effect. There was a real emphasis then on the independence of evidence and sort of putting things at arm's length. So OPRE does not have the word "policy" in our title. We are not a policy office; that is intentional. I also think, though, that in the early stages that sort of separation may have reduced the relevance of some of the work.

So I think we're now shifting into this space of how we maintain our independence, which is important, while also continuing to make sure that we're doing relevant work for our program partners. I think some of that arm's length also led to some mistrust and resentment in some social policy programs about research. We probably always told them what they were doing wrong, and that didn't really engender trust.

So I think that's where a lot of that relationship building I mentioned really is key. In the mid-2000s, the Bush Administration and then the Obama Administration really started directing resources to evidence in decision-making. So there were certain grant programs where evidence was tied to decisions about funding. That was when many of the systematic reviews started to emerge, like the What Works Clearinghouse, and standards of evidence in fields started to emerge.

I think we responded and started to grow. This is my second tour of duty in the office. When I left, there were about 45 people, and by the time I returned, we had almost tripled in size. That is because the role of evidence in Government has just become very solidified. Programs understand why it matters. They understand they need to measure these things to understand what's going on, which is really exciting to see.

Then the bipartisan Foundations for Evidence-Based Policymaking Act of 2018 really was another launching point -- as you mentioned, Emily, we're about at the five-year mark. It allowed OMB to call for a broader definition of "evidence," to call for more investment in data infrastructure and technology, and to think about new methods. I think it was around this time that evidence really became part of core Government operations, and civil servants saw it as something that really was part of their job -- again, another launching point shifting into the solidifying of evidence as a core, important operation.

Then of course now this current Administration has multiple executive orders and White House memorandums on scientific integrity and indigenous knowledge and the use of data to reduce inequality. So we're continuing to respond, looking at participatory approaches in this new data strategy I mentioned, and at how we build local grantee capacity to use data and formulate research questions to strengthen their own practice.

I do want to also mention -- we were talking about the future -- you may have seen there was a new HHS scientific integrity policy that was released a couple weeks ago. One of the reasons why I think this is particularly exciting is that it defines "science" very broadly. So it includes the many different ways social and behavioral science is generated and used, and it now includes important protections to ensure that those findings are accessible and used, covering the development of science, the dissemination of science, and the professional development of Government scientists. So I think this could be another sort of pivot point to help think about the broad ways that we can incorporate science into policymaking.

Then I know you asked about where we're going. At least what I've been thinking about for the next 5 to 10 years is continuing to draw on the research on research use, the science around how and why evidence is used in policy and practice. That means deepening our own understanding of how our work is achieving that goal and integrating that science into how we do our work -- sort of continuous improvement internally.

It also means, I think, deepening our role as intermediaries in the Government -- that person who sits between research and policy -- but also better understanding intermediaries in the field, for example, technical assistance providers. The Government spends a lot of money on technical assistance provision, and I don't know how much we understand how they do their work and how we could integrate science into their work. So the more we can study those systems and then develop, similar to what I was talking about before with that app, a routine or a way that we can build science into their work -- I think that's an exciting frontier.

Then finally, I would be remiss if I didn't mention artificial intelligence. I think it's going to be both a tool that we can use to fulfill this mission and something we have to study. I can see that at some point we could use it to synthesize the decades of research on a topic so that, rather than waiting to write a long literature review, we could have a quick synthesis of OPRE's research over 30 years when that policy window is open. But I also know that we have to think about how to evaluate its use in the field, and I think it has some unique aspects to it that are different from the social programs that we've studied for many decades. I don't have an answer yet, but it is certainly top of mind.

[J.B. WOGAN]

I was wondering if AI and generative AI would come up in that answer, but glad you spoke to it – certainly something we’re thinking about a lot with the future of Mathematica’s work.

So you've mentioned before that you have a blog series, and I wanted to flag that there's a series of blogs about the use of evidence which incorporates research-based insights on how to ensure evidence gets used. So it's kind of evidence-based advice about the use of evidence. One term for this body of work, I think, is "translational science." I think I've also seen "translational research." There are probably other names for it too. But given what you know about best practices based on translational science, are there elements of the Blueprint that you found especially heartening because they're applying evidence-based practices on how to ensure that the evidence is useful and gets used?

[LAUREN SUPPLEE]

Yeah, and thanks for highlighting my blog series. I've written that series to make sure that people are aware that there is this empirical body of research. I think often researchers, while we might bemoan when someone uses an anecdote to justify a policy, are most of the time using anecdotes to justify our own practice in disseminating science. So I wanted to highlight that there is a science out there, and then ask how we apply it.

So before I talk about the Blueprint, I want to highlight for folks that are listening that the body of research we're talking about here is fascinating to me, because it doesn't matter whether we're talking about social policy, international relations, climate science, or education, the findings have been very consistent about what it takes to get research used: it's shared in the context of a trusted relationship -- it's like a theme on this podcast, and it could be with an individual or a trusted organization; the work is timely and topically relevant to the decisions being made -- again, something else that I've talked about a lot today; and then routines -- it's embedded in the existing routines. I think often researchers will step in and say, "I've created a tool for you to use research."

But then, if it's not how someone does their job, it's unlikely to be used -- so really stopping and better understanding how people do their work, and then how we could integrate science into that. So it's really about human behaviors and organizational systems rather than disciplines.

So to go back to the Blueprint: I will say it's not specific to the Blueprint; they actually highlight a lot of things in there that are also in the Evidence Act and the related OMB guidance. But you can see threads of the relational, partnership-based approach to science generation -- planning and using those learning agendas to co-produce, to get relevant research over time, to build that trust. They mention research-practice partnerships, another way of co-producing the research together. The idea of incorporating lived experience is a piece of infrastructure that really gets at that relevance and those relationships.

Then there's thinking about how to incorporate it into -- we talked about it before -- the (inaudible) formulation, the policy generation process. Those are all pretty regular Government routines. So how do those routines happen? Finding ways to integrate science into those is, I think, really important. The Blueprint does actually use the words "ecosystem" and "institutional infrastructure," which I think is great to see. They talk about the incorporation of behavioral and social science into program and policy, from the conceptualization of a program through implementation; and that's another way of thinking about how we build those relationships and that relevant work into the routines throughout what we do.

I think the Blueprint also mentions external entities, such as technical assistance providers. As I've mentioned already, they are key intermediaries sitting between research and practice. So we ourselves have started to study our own practice in this way. We have started to try to understand how TA providers are using evidence. We try to understand how local grantees are using practitioner-based products that we're putting out. And then our idea is to start to feed those findings back into how we do our work.

So I think that if we can ground ourselves in those concepts from the field of the science of research use, we can see the thread throughout all of these examples in the Blueprint, but also in a lot of the wider Government conversations about how to get evidence used. It's really just about lifting that up and deepening that practice, in my opinion.

[J.B. WOGAN]

So what you're saying reminds me of a conversation I had once with a friend of the pod, Jenni Owen, at the Office of Strategic Partnerships in North Carolina. She works for -- I guess through the end of this year -- Governor Roy Cooper in North Carolina. She was saying that sometimes his office will get e-mails from academics who will say, "You might be interested in this research," and then a link to the paper. What would be more helpful is if there was a message with greater context that said, "You might be interested in this research because I know that you're working on X policy" -- whatever it is, maybe something like preventing opioid overdoses -- "and I have a new paper on the effectiveness of naloxone. These findings could be informative for the specific initiative that you're developing right now." So it gets at the relevance and the timing.

Then I think her office also does work around building those trusted partnerships, so it's not just a one-off email from an academic who doesn't know the Governor's Office, but an actual ongoing relationship.

[LAUREN SUPPLEE]

Yeah, Jenni is a friend and colleague. She is also a key partner in that Transforming Evidence Network work I mentioned. She is very well-versed in that same base of science. So it doesn't surprise me that those same themes came up in how she has thought about designing her office.

[J.B. WOGAN]

All right, well, any closing thoughts -- anything you want to leave the listeners with?

[LAUREN SUPPLEE]

I mean, it's been a great conversation. Thank you so much for this opportunity.

I guess I would love it if people can go to some of those blogs and start to really think about these three elements that we've just lifted up -- the relationships, the routines, the relevance -- because they're really simple concepts. But all of the research right now is pointing to them as a really important foundation for getting research use infused in policy and practice. If we deeply understand them and start embedding them in how we think about our work, I think it could have the potential to really transform the role of science in policy and practice.

[J.B. WOGAN]

So the three themes were the routines, relevance, and...

[LAUREN SUPPLEE]

Relationships.

[J.B. WOGAN]

...relationships. Great. All right, that's a great note to end on.

Lauren, thanks so much for speaking with us today.

[LAUREN SUPPLEE]

Yeah, thank you so much for your time.

[J.B. WOGAN]

Emily, thank you for joining us as a co-host today.

[EMILY MOIDUDDIN]

It was a lot of fun.

[J.B. WOGAN]

Thanks again to our guest, Lauren Supplee; and thanks to Emily Moiduddin for joining me as a co-host for this episode of On the Evidence, the Mathematica podcast.

In the show notes, we’ll link to resources referenced in the conversation; so be sure to check those out.

This episode was produced by my Mathematica colleague, Rich Clement. If you liked the show, please consider leaving us a rating and review. We’re on YouTube, Spotify, Apple Podcasts, and elsewhere. It helps others discover the podcast. To catch future episodes, subscribe at Mathematica.org/ontheEvidence.

Show notes

Read the White House Office of Science and Technology Policy blog announcing its Blueprint for the Use of Social and Behavioral Science to Advance Evidence-Based Policymaking.

Read Lauren Supplee’s blog about measuring whether and how evidence is used.

Read Supplee’s blog about cultivating more “knowledge brokers” in social policy research who translate complex data into action-ready insights.

Read Supplee’s blog about establishing systems to support the use of evidence.

Read a blog by Colleen Rathgeb, the former director of policy at the Office of Head Start and current associate deputy assistant secretary for the Office of Early Childhood Development, about research showing the need for full-day, year-round Head Start programs.

Explore the ELOF 2 Go mobile app, the free online tool Supplee references that supports teachers who want to access and learn more about the Head Start Early Learning Outcomes Framework (ELOF).

Watch the video series referenced by Supplee that shares the perspectives and experiences of those who are involved in obtaining and using data from the American Indian and Alaska Native Head Start Family and Child Experiences Survey.

About the Author

J.B. Wogan

Senior Strategic Communications Specialist