Does it really matter how we reach a scientific conclusion? If a recommendation is based on good practice, and is easily understood, what difference does it make if the general public understands how we got there? Potentially, quite a lot. Jaime Anne Earnest is a researcher, policy analyst and science writer who explores the intersections of the epidemiological, psychosocial and bioethical. Having just completed her PhD studies, partially based in the Institute, Dr Earnest tells us about her research looking at the relationship between scientific methods and behaviour change.
Do Methods Really Matter?
How the Public Perceives Computational Models May Affect Behaviour
The refrain is becoming familiar: media matters. The public matters. The scientific community’s ability to work in tandem with a diverse array of characters—from journalists on the science beat to legislative committees—matters. Training on how evidence makes its way into scientific policy and workshops teaching early-career researchers to write press releases abound. Public engagement is becoming as salient a concern on grants, and in talks, as actual data. Regardless of your discipline, part and parcel of becoming a good scientist or researcher is understanding that there is a relationship between the life in our labs and the lives of people on the street.
“Public engagement is becoming as salient a concern on grants, and in talks, as actual data.”
It’s often observed that much of what mediates this relationship is trust: whether the public trusts us, or trusts our findings, plays a significant role in whether or not they’ll act on recommendations that can impact their well-being. Whether the risks we’re addressing are individual, like chronic disease, or collective, like our changing climate, much of the meaning of what we do as researchers depends on whether the public is willing to act on the suggestions we make.
With the movement toward evidence-based policymaking increasingly demanding collaboration between scientists and policymakers, and the stakes for public behaviour change so high, it’s critical that we develop a more sophisticated understanding of how the public perceives both what we suggest and the processes by which we come to our conclusions. It’s all well and good to tell people to wash their hands during an outbreak of norovirus, but if they don’t have confidence in how we came to this recommendation in the first place, how likely are they to do what we ask? “Because Science.” is always a tempting answer, but it may not be sufficient to get the optimal outcome we’re seeking.

I decided to investigate the relationship between scientific methods and behaviour change with a study. In collaboration with Dr Rebecca Mancy from the Institute, and Dr Kate Reid and Dr Alexia Koletsou from the College of Social Sciences at the University of Glasgow, we found that the methods used to underpin public health policy (such as scientific modelling, experiments, and expert advice) may have an impact on the public’s projected behaviour changes, ranging from recycling for climate change mitigation to avoiding contact during a pandemic.
We argue that, if the public lacks confidence in the methods used by the public health community to make recommendations, individuals may be less likely to enact them, potentially undermining the success of the policy. This is fundamental information for researchers and policy stakeholders, as well as health communicators and educators, to consider when designing, teaching, or promoting health policy that requires behaviour change. This is particularly true when those policies are underpinned by evidence from methodologies unfamiliar to the public, such as computational modelling. How people understand modelling, and whether they have confidence that its increasingly common use is part of “good science”, can influence their projected behaviour change, and stop a solid evidence-based policy recommendation in its tracks.

The good news is that more standard, or traditional, methods of inquiry, such as experiments, generally received a more positive response, with participants recognising them as part of “good” scientific practice. This suggests that people are likely to have confidence in a finding and recommendation derived from a more readily recognised scientific tool. The bad news is that few evidence-based policies are based on findings from the experimental literature; particularly in epidemic mitigation, modelling methods are becoming the norm.
As we’re expanding and redefining the role of ‘the scientist’ in public life, it will serve us well to remember that it is not simply our findings and outcomes that we need to be vigilant about communicating. We need to advocate for increased public understanding of scientific findings, certainly, but also for increased scientific literacy around the means we use to gather data and make predictions. When it comes to making a difference with our data, methods really do matter.