This week’s SAPPHIRE Spotlight profile focuses on the very exciting work of Dr Emmilie (Emma-Louise) Aveling. Emmilie is a Research Fellow in the SAPPHIRE group who specialises in applied qualitative research in the fields of global health and healthcare quality and safety. She is currently based at Harvard T.H. Chan School of Public Health, where she is a Visiting Scientist (2015-2017). Emmilie trained in social and cultural psychology and went on to work in a number of non-governmental and social welfare organisations in the UK, Africa, Cambodia, and Australia. She then entered academia, and her current research focuses on the processes shaping the implementation and outcomes of health interventions in low- and high-income countries.
One of her most recent projects was a comparative study looking at the implementation and use of the WHO Surgical Safety Checklist in operating theatres in English and sub-Saharan African hospitals. This week we are launching a report written by Emmilie along with her colleagues Mary Dixon-Woods (University of Leicester), Peter McCulloch (University of Oxford), Yvette Kayonga (Catholic University of Rwanda), and Ansha Nega (Gondar University). The report presents seven key lessons to support the implementation of safe surgery checklists in diverse settings. Emmilie very kindly answered some questions for this blog, and her answers give a real insight into the complexity and importance of her work in general and this study in particular.
Q: You were involved in a lot of really interesting work before entering academia – what was it that made you want to go into research? Does that background inform your attitude to projects or give you a particular perspective, in particular with regards to your most recent work on the Safety Checklist?
A: I think what motivated me to go into research was the desire to understand why, despite the best intentions and a lot of hard work, interventions that aim to improve people’s health and well-being too often fail to achieve their aims. I thought research would be one productive way to develop a better understanding of what makes interventions work – whether that be community-based interventions with marginalised groups in London, interventions to reduce HIV transmission in Cambodia or interventions to enhance the quality and safety of healthcare in Africa, Europe or America.
What’s increasingly clear is that this entails developing how we conceptualise, describe and evaluate interventions. For example, in project proposals and reports, interventions often appear very orderly and neat: it’s clear what was done, it’s clear what happened, and a fairly linear cause-effect relationship is implied. Yet my experience working with practitioners outside of academia and later during ethnographic research is that the reality of what happens during interventions is anything but neat: it’s messy, complex, a constant process of negotiation, of engaging and listening and persuading, of responding to the unexpected and adapting to contextual contingencies, and attempting to navigate competing demands, priorities and interests – one’s own and those of the many groups and individuals involved. I’m interested in understanding that ‘messy’ bit: in concrete instances, how can we accurately describe it? And more generally, how ought we to conceptualise it and what does that mean for how to approach evaluating and understanding whether and why an intervention worked (or not)?
So I guess one thing that informs my perspective is a bit of suspicion when any intervention that has succeeded in changing practice or behaviours – in professional settings or everyday life – is described as ‘simple’. With the surgical safety checklist, while the checklist itself might be quite a simple tool, getting healthcare practitioners to use it consistently, in full and in a way that enhances subsequent care for patients may be anything but simple. As my friend and colleague Flora Cornish puts it, “Interventions do not just work automatically, they have to be made to work”.
Something else I take from my experiences outside of academia is a deep respect for those who get out there and try to make change – roll their sleeves up and take the risks in trying – and are not only engaged in ‘interpreting’ the world, as Karl Marx put it. They also gave me an awareness that what’s being discussed and promoted within academic research circles does not always reach those who are engaged in trying to make change. It’s one thing to do good quality research, it’s another to make good use of it. The latter is what troubles me most as an academic researcher – is my research useful to others? That is what motivated this report: we wanted to share the lessons we had learnt about checklist implementation with practitioners in a form that was easily and freely accessible.
Q: What drew you to the idea of the Safety Checklists? What do you think makes this particular tool interesting and important to study?
A: One thing that drew our attention was the speed with which the tool was taken up in hospitals around the world; in some places, such as the UK, its use fairly quickly became mandatory following the 2009 pilot study of the impact of the checklist in eight hospitals.
I had just finished working on a study led by my colleague Mary Dixon-Woods, which sought to develop an ex post theory of the Michigan Intensive Care Unit (ICU) project that attracted international attention by successfully reducing rates of central venous catheter bloodstream infections. What this paper showed was that although a checklist was an essential element of the intervention to reduce central line infections, it was only one component in a comprehensive, complex intervention that operated also at the socio-cultural and organisational levels. It seemed likely that making surgical checklists work would similarly require this kind of ‘adaptive’ work.
Widespread uptake of the surgical safety checklist also provided an excellent opportunity for comparative study of implementing the same tool in diverse contexts. This kind of comparative study allows us not only to describe and understand what happened in a specific context, but also offers the opportunity to theorise more broadly about how checklists can be made to work.
Q: The report is a very useful, focused, and practical tool for those trying to implement the safety checklist. Do you think there are any wider implications of the study for other checklists, or for patient safety tools other than checklists?
A: Yes. Although checklists have not been adopted as widely within healthcare as in some other industries (e.g. aviation), a lot has recently been written about their potential value in healthcare – so it’s important to understand what goes into getting people to use checklists and making them work.
Certainly the fundamental point in the report – the need to create ‘enabling’ contexts that support meaningful checklist use – applies to most, if not all, patient safety tools. Getting people to adopt a new practice or use a new tool is not just a question of providing them with the tool and the information about how to use it. As much attention needs to be given to ensuring that the context into which it’s being introduced actually supports and enables its use – whether that be ensuring that the necessary equipment is reliably available, or ensuring that frontline staff charged with using the tool have the necessary leadership and institutional support.
Q: A lot of the recommendations from the report are focused on team dynamics and local attitudes towards/support for the checklist. Was this surprising or something you anticipated?
A: I don’t think the importance of team dynamics and local attitudes was a surprise. I think most people now recognise that getting people to use a checklist, for example, is as much a social intervention as a clinical one. Moreover, the surgical safety checklist was explicitly designed to tackle some of the unhelpful surgical team dynamics that can threaten patient safety: purposeful inclusion of ‘non-technical’ items on the checklist (e.g. team introductions) was intended to promote good teamwork and communication. What was perhaps different from some other studies of checklist implementation was that our study clearly showed that changes in teamwork practices do not automatically follow checklist introduction, even where non-technical items are included; rather, the way the checklist was used tended to reflect existing team dynamics. This underscores the need to recognise the social dimension of such interventions, and to think carefully about how to enhance team dynamics and relationships in order to enable teams to use the checklist effectively.
Q: Some of your current projects also examine the implementation of a particular intervention in different contexts – is there something that draws you to this sort of comparison?
A: I think there are a couple of things (at least!) that draw me to this sort of comparison. One is the potential for generating theoretical insights from comparative case studies – moving beyond description of a single case to theorising about the processes that underpin healthcare improvement, or undermine it. In the case of patient safety specifically, we were also interested in looking at both low- and high-income countries because the majority of research about patient safety, the driving concepts and the dominant tools in the field, have been developed in the context of high-income country healthcare systems. It is not (yet) clear to what extent these conceptualisations align with the views and experiences of staff working in diverse socio-economic contexts.
In this regard, I think one of the most striking findings from our study has been not the differences but the similarities between diverse contexts: what distinguishes these different contexts is not the nature of the hazards that threaten patient safety or undermine improvement efforts, but the scale of some of these challenges. For example, a fairly universal finding in studies of the checklist from around the world is that there is often some resistance to use of the checklist, especially amongst surgeons, and that hierarchical dynamics between doctors and other clinicians mean that resistance from surgeons is particularly problematic. The differences tend to lie in the scale and steepness of these authority gradients and hence in the impact of surgeon resistance on implementation efforts.
Q: Context seems to be central to a lot of your work – would you say it’s often neglected in consideration of healthcare interventions? Are there some aspects of context that are more important than others, or would you say that you can’t separate any one element from the whole picture?
A: I think perhaps historically the importance of context in understanding why interventions produce the outcomes that they do was neglected, but I think now the centrality of context is pretty well recognised. Where I think there is still a way to go is in how we conceptualise and study the interrelationship between context and intervention. My own view (reflecting no doubt my disciplinary background as a social psychologist) is that interventions and context are interdependent – each shapes and constrains the other – and so they are perhaps not quite as discrete as discussion about the impact of context ‘on’ interventions implies. So, for example, if an intervention appears to work in a given context (the desired outcomes are achieved), what underpins this change may be as much to do with certain supportive features of that context as any particular component of the intervention. Similarly, while it might be practically or analytically useful to divide up ‘context’ into different aspects or dimensions, I think we should be wary of being too atomistic in how we conceptualise ‘context’. We need to recognise that interventions and contexts are open systems, in which on-going interactions continuously shape and re-shape elements of the system, allowing novelty to emerge and undermining the predictability implied by the kind of linear cause-effect thinking I described at the beginning of the interview.