My latest paper with Li Qian Tay and colleagues, “A focus shift in the evaluation of misinformation interventions”, has just been accepted for publication in the Harvard Kennedy School Misinformation Review. The abstract for the paper is below:
The proliferation of misinformation has prompted significant research efforts, leading to the development of a wide range of interventions. There is, however, insufficient guidance on how to evaluate these interventions. Here, we argue that researchers should consider not just the interventions’ primary effectiveness but also ancillary outcomes and implementation challenges.
My latest paper with Matthew Andreotta and colleagues, “Evidence for three distinct climate change audience segments with varying belief updating tendencies: Implications for climate change communication”, has just been accepted for publication in Climatic Change. The abstract for the paper is below:
Mounting evidence suggests members of the general public are not homogenous in their receptivity to climate science information. Studies segmenting climate change views typically deploy a top-down approach, whereby concepts salient in the scientific literature determine the number and nature of segments. In contrast, in two studies involving Australian citizens, we used a bottom-up approach, in which segments were determined from perceptions of climate change concepts derived from citizen social media discourse. In Study 1, we identified three segments of the Australian public (Acceptors, Fencesitters, and Sceptics) and their psychological characteristics. We find segments differ in climate change concern and scepticism, mental models of climate, political ideology, and worldviews. In Study 2, we examined whether receptivity to scientific information differed across segments using a belief-updating task. Participants reported their beliefs concerning the causes of climate change, the likelihood climate change will have specific impacts, and the effectiveness of Australia’s mitigation policy. Next, participants were provided with the actual scientific estimates for each event and asked to provide new estimates. We find significant heterogeneity in the belief-updating tendencies of the three segments that can be understood with reference to their different psychological characteristics. Our results suggest tailored scientific communications informed by the psychological profiles of different segments may be more effective than a ‘one-size-fits-all’ approach. Using our novel audience segmentation analysis, we provide some practical suggestions for how communication strategies can be improved by accounting for segments’ characteristics.
My latest paper with Douglas MacFarlane and colleagues, “Reducing Demand for Overexploited Wildlife Products: Lessons from Systematic Reviews from Outside Conservation Science”, has just been accepted for publication in Conservation Science and Practice. The abstract for the paper is below:
Conservationists have long sought to reduce consumer demand for products from overexploited wildlife species. Health practitioners have also begun calling for reductions in the wildlife trade to reduce pandemic risk. Most wildlife-focused demand reduction campaigns have lacked rigorous evaluations, and thus their impacts remain unknown. There is, therefore, an urgent need to review the evidence from beyond conservation science to inform future demand-reduction efforts. We searched for systematic reviews of interventions that aimed to reduce consumer demand for products that are harmful (e.g., cigarettes and illicit drugs). In total, 41 systematic reviews were assessed and their data extracted. Mass-media campaigns and incentive programs were, on average, ineffective. While advertising bans, social marketing, and location bans were promising, there was insufficient robust evidence to draw firm conclusions. In contrast, the evidence for the effectiveness of norm appeals and risk warnings was stronger, with some caveats.
My latest paper with Li Qian Tay, Tim Kurz, and Ullrich Ecker, “A comparison of prebunking and debunking interventions for implied versus explicit misinformation”, has just been accepted for publication in the British Journal of Psychology. The abstract for the paper is below:
Psychological research has offered valuable insights into how to combat misinformation. The studies conducted to date, however, have three limitations. First, pre-emptive (“prebunking”) and retroactive (“debunking”) interventions have mostly been examined in parallel, and thus it is unclear which of these two predominant approaches is more effective. Second, there has been a focus on misinformation that is explicitly false, but misinformation that uses literally true information to mislead is common in the real world. Finally, studies have relied mainly on questionnaire measures of reasoning, neglecting behavioural impacts of misinformation and interventions. To offer incremental progress towards addressing these three issues, we conducted an experiment (N = 735) involving misinformation on fair trade. We contrasted the effectiveness of prebunking versus debunking and the impacts of implied versus explicit misinformation, and incorporated novel measures assessing consumer behaviours (i.e., willingness-to-pay; information seeking; online misinformation promotion) in addition to standard questionnaire measures. In general, we found debunking to be more effective than prebunking, although both were able to reduce misinformation reliance. We also found that individuals tended to rely more on explicit than implied misinformation both with and without interventions.
My latest paper with Adam Osth, “Do item-dependent context representations underlie serial order in cognition?” has just been accepted for publication in Psychological Review. The abstract for the paper is below:
Logan (2021) presented an impressive unification of serial order tasks including whole report, typing, and serial recall in the form of the context retrieval and updating (CRU) model. Despite the wide breadth of the model’s coverage, its reliance on encoding and retrieving context representations that consist of the previous items may prevent it from addressing a number of critical benchmark findings in the serial order literature that have shaped and constrained existing theories. In this commentary, we highlight three major challenges that motivated the development of a rival class of models of serial order, namely positional models. These challenges include the mixed-list phonological similarity effect, the protrusion effect, and interposition errors in temporal grouping. Simulations indicated that CRU can address the mixed-list phonological similarity effect if phonological confusions can occur during its output stage, suggesting that the serial position curves from this paradigm do not rule out models that rely on inter-item associations, as has previously been suggested. The other two challenges are more consequential for the model’s representations, and simulations indicated the model was not able to provide a complete account of them. We highlight and discuss how revisions to CRU’s representations or retrieval mechanisms can address these phenomena and emphasize that a fruitful direction forward would be to either incorporate positional representations or approximate them with its existing representations.