Editorial | Jul 29, 2020

      The importance of science-based decisions

      An excerpt from Scientific Update 10

       

      TIME TO READ: 5 MIN

      By Dr. Moira Gilchrist, VP Strategic & Scientific Communications

      Evidence-based decision-making means using objective facts to determine the right course of action. I’m writing here about tobacco harm reduction specifically, but any decision worth making is worth making right. And those decisions should always be made in the context of the totality of available evidence.

      Humans can be terrible decision makers, especially in the heat of the moment and when tensions are high. Scientists and regulators are only human too, and they have the same weaknesses as anybody else when it comes to decision-making. This is why philosophers have developed various guidelines for drawing conclusions and making decisions. No two people see the world the same way, which is why it’s so important to gather data using methods and tools everyone can agree on. That’s also why it’s important to debate the data openly and transparently, and to do so while being clear about both implicit and explicit biases. It gives us all common context to work from.

      "Sufficient data must be collected, and necessary questions must be asked, to ensure the researchers accurately rule out confounding factors."
      - Dr. Moira Gilchrist

       

      Appropriate and validated methods

      Research is – and should be – deliberate. A good scientist defines their hypothesis and chooses their experiments, tools, and methods with care. They make choices that ensure they are measuring what they intend to measure, so that the data they collect are complete enough to support sound conclusions. Solid data lets other researchers accept those conclusions and use that data as a comparison or a launching pad for their own research.


      For example, one can conduct a survey of current e-cigarette users, track the health of this population over time, and determine whether these consumers have an increased or decreased health risk of some kind, relative to other populations. But sufficient data must be collected, and necessary questions must be asked, to ensure the researchers accurately rule out confounding factors. Many e-cigarette users are also either former or current smokers, and that smoking history is well known to carry increased health risks. Using appropriate methods and statistical analysis can help to clarify whether and how each factor might contribute to increased risk, or to risk reduction.
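      To make the confounding point concrete, here is a minimal, hypothetical sketch of how an analyst might adjust for smoking history when comparing outcomes between e-cigarette users and other groups. The dataset, the column names, and the use of the statsmodels library are all assumptions for illustration only; they do not describe any particular study.

```python
# Hypothetical sketch: adjusting for smoking history as a confounder.
# The dataset and column names (ecig_user, former_smoker, current_smoker,
# age, adverse_outcome) are invented purely for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort_survey.csv")  # hypothetical survey data

# Naive model: ignores smoking history, so any excess risk among
# e-cigarette users may simply reflect their past or current smoking.
naive = smf.logit("adverse_outcome ~ ecig_user", data=df).fit()

# Adjusted model: includes smoking history and age, so the coefficient
# on ecig_user estimates the association net of those confounders.
adjusted = smf.logit(
    "adverse_outcome ~ ecig_user + former_smoker + current_smoker + age",
    data=df,
).fit()

print(naive.summary())
print(adjusted.summary())
```

      Comparing the coefficient on ecig_user across the naive and adjusted models indicates how much of an apparent risk difference is explained by smoking history and age rather than by e-cigarette use itself.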

      One deliberate choice that scientists can make is to transparently publish their methods. This is usually done within a research paper, though some researchers take transparency to the next level. Data analysis programs can be published online for others to freely explore. In some cases, scientists can rely on publicly available and validated methods or standard protocols for things like measuring Nicotine-Free Dry Particulate Matter (NFDPM) and using well-known “smoking regimes” for collecting and comparing aerosols in a reproducible way. It’s also important to have accepted standards that help contextualize results, such as regional or national guidelines for chemical exposure, or limits on the presence of certain chemicals in the context of indoor air quality.

       

      Transparency and scientific discussion

      When a company, a scientist, or a regulator acts with transparency, it’s a signal that their actions and their intentions match their words. Transparent actions build trust by inviting and withstanding scrutiny, even from the harshest of critics.

      But what does it actually mean to act transparently?

      There are some great ways to make research more open and transparent. For example, many scientists provide the analysis code and even the raw data behind their study so others can analyze the same data and come to their own conclusions. Indicating whether and why any data were excluded, and explaining how the study’s sample size was chosen, are also important for transparent reporting of results.

      Our INTERVALS.Science platform enables independent data re-analysis and collaboration by openly sharing protocols, tools, and research data. We also publish our product assessments openly in scientific journals, in large part because of the peer-review process. Wherever possible, we prefer journals that offer an open access option. We also respond openly to comments on our publications.

      Transparent actions like these all make it easier for critics to scrutinize our work. That scrutiny is welcome. We are confident that our work and results are built on a strong foundation of evidence – so confident that we’re willing to share that foundation. But we also recognize that there may be gaps in our research that others can see more easily.

       

      Addressing bias in research

      Everyone has a bias of some kind. When the manufacturer of any product, whether food, pharmaceuticals, or smoke-free products, conducts or funds scientific research on it, there is always a potential for bias. This is why we require researchers to acknowledge our involvement when they present or publish results of studies we have financially supported.

      Independent researchers can have a bias too: in favor of their funding organizations, their research collaborators, or even their own personal opinions. The scientific method calls for a hypothesis in order to design the experiment, and we humans prefer to be right. Confirmation bias is the devil on a person’s shoulder, making it difficult to accept the evidence that shows their hypothesis may be wrong or that someone else’s hypothesis might be right.

      Don’t get me wrong: a little healthy competition is actually great for pushing science forward! But antagonistic factions with an “us vs them” mentality have no place in the scientific community. With that in mind, it’s even more important for every scientist – industry and independent alike – to use validated methods, to share their research openly, and to invite discussion and feedback. Individual studies, no matter how well they’re designed, can only ever tell part of the story.


       

      Big decisions require big-picture thinking

      Studies with strong, thorough, and well-validated methods should carry more weight than those using weak or poorly described methods. Studies where the authors or funders have known biases should face scrutiny, but they should not be excluded on that basis alone. Where several different groups of researchers have presented similar data or come to similar conclusions, that reproducibility gives them a stronger collective voice in the discussion, provided that the data are of high enough quality.

      This is why it’s so important to consider all the available evidence and to weigh each piece of research by the extent to which it contributes to the current understanding of the topic. Each published study is one tiny window onto the world, with its own angle and frame. It takes many such windows, and an understanding of their different perspectives, to get a clear picture.
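      As a purely generic illustration of what weighing each piece of research can look like in practice, the sketch below pools hypothetical effect estimates using inverse-variance weights, as is done in a fixed-effect meta-analysis: more precise studies contribute more to the combined picture. The numbers are made up, and the technique is a standard textbook approach, not a description of any specific assessment.

```python
# Generic illustration of inverse-variance weighting (fixed-effect pooling).
# Effect estimates and standard errors below are entirely made up.
import numpy as np

effects = np.array([0.80, 0.75, 0.95])   # hypothetical effect estimates from three studies
std_errs = np.array([0.10, 0.25, 0.40])  # larger standard error = less precise study

weights = 1.0 / std_errs**2              # more precise studies receive more weight
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled estimate: {pooled:.3f} (SE {pooled_se:.3f})")
```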
