Why Healthcare Research?

The rise of interest in research methods

If you search on Amazon.co.uk for books on research methods in health, you will find a little over 1600 titles. There are books on statistics, books on evaluation, on qualitative research, on mixed methods research, on survey questionnaires, books for clinicians and books aimed specifically at students doing work for degrees. There are books for nurses, psychologists, social workers and sports scientists, as well as for professional groups that few of us have heard of. Clearly someone – the authors and editors of these books as well as the publishers – believes that their target audiences will be interested enough in research to buy these books and, as an optional extra, read them.

Similarly, a few years ago colleagues and I studied published research which had appeared in nursing journals, using the Research Outputs Database, a searchable database of published biomedical research developed by the Wellcome Trust. Between 1988 and 1995 the number of nursing papers appearing there rose from 0.55% to 1.29% of the database. Though small, this increase made nursing the fastest growing biomedical sub-field during this period (Rafferty, Traynor and Lewison, 2000). At around the same time, the United Kingdom (UK) Royal College of Nursing commissioned a review of UK nursing journals which published any research material. The study identified 27 titles published by 18 publishers. Nearly half of these titles (15 journals) had started publication within the previous five years and, of these, one third had existed for no more than two years (Holdich Stodulski, 1995). Even in the mid 1990s, research was a growing enterprise amongst this section of health professionals. Today the international list of high quality research journals, ISI Web of Knowledge, contains 72 journals in its nursing subject list (http://scientific.thomsonreuters.com). Nursing is the only subject area of the Science Citation Index Expanded, on which this part of the Web of Knowledge is based, which maps on to a single professional group, so it gives a broad indication of the extent of research here.

The rise of research in the National Health Service: evidence based medicine and evidence based practice

How can we account for this growing interest? Nurses and other health professionals were not taking up research for purely internal reasons. A new policy imperative within the UK National Health Service (NHS) set the context for this. Like so much United Kingdom health policy that has an impact on nursing, the chief focus of research policy from the Department of Health in the early 1990s was on problems within medicine. A House of Lords Select Committee on priorities in medical research had identified the United Kingdom’s falling contribution to international research and reported that ‘the NHS was run with little awareness of the needs of research or what it had to offer’ (House of Lords Select Committee on Science and Technology, 1988). The eventual outcome of the committee’s work was the National Health Service’s first research strategy, launched in 1991, with identified funding streams in priority areas for research (Department of Health, 1991). Research was to be centred around the needs of the service rather than the interests of the research community.

At its outset there were no separate funding streams for different professions, so the less politically powerful professions with less of a research background had to compete in an interdisciplinary field for research funding. Many did and some were successful. Within nursing a debate developed about the advantages and disadvantages of such an arrangement. Some argued that the profession needed to show that it had the research maturity to bid for funding on the same terms as doctors and others, while others pointed out that nursing and other health professions needed special funding to develop research capacity and infrastructure, and that the amount of research funding they received at the time was disproportionately small in relation to the number of nurses delivering care. The argument was that more and better nursing research could have an enormous impact on patient outcomes and experience (Rafferty, Bond and Traynor, 2000). These arguments gained some sway, and the late 1990s and early 2000s saw an unprecedented amount of research money available for these professions: research capacity funding from the Higher Education Funding Council for England (HEFCE) jointly with the Department of Health, and major funding from large charities such as the Health Foundation. These were heydays, however: not many years later such sources had diminished.

The benefit, however, was that it raised the game for research in these underdeveloped professions (to be accurate, some of the health professions had well-established research traditions). The model of ‘mainstream’ scientific research, reflected in the new NHS funding streams, was one of teams of well-established experts working with large grants to solve clinical or organizational problems, possibly hiring junior research staff to collect data. The norm in nursing research, at least up to that point, was one of single researchers working on areas of personal interest, often around educational issues, often for research qualifications, usually without any funding, and failing to build up a body of knowledge for the profession as a whole in any coordinated way. The characteristics of the research that did appear in nursing journals reflected this model (Traynor, Rafferty and Lewison, 2001) and were atypical of biomedical research as a whole.

The final piece of background needed to understand why nursing and other health professions have become so interested in research is the emergence, along with the Research & Development (R&D) strategy in the early 1990s, of the Evidence Based Medicine (EBM) movement, later relabelled Evidence Based Practice (EBP) to reflect its ambition to include all healthcare practice. This movement made a strong challenge to traditional hierarchy and authority based practice in medicine, arguing that senior medical figures were likely not to be up to date with the latest scientific research that could be relevant for their clinical practice (Sackett, 2000). A further argument was that even hard working clinicians cannot hope to keep an intelligent and critical grasp of all the research that is currently published in their areas. The solutions were the development of new databases of what was considered the most rigorous and reliable research, to which clinicians could turn in order to answer clinical questions, and the emergence of a great many courses, worldwide, to teach clinicians and other researchers how to appraise the trustworthiness of published research: critical appraisal (Chapter 5). Reading research was no longer sufficient. Clinicians needed to know whether it could be trusted if they wanted to base their decisions on it, or push for some change to established practice.

Understanding research in healthcare

So, to address the question: why do healthcare students and practitioners need to have an understanding of research? The first answer is because it is part of the language of healthcare today, so the practitioner who blinks and asks what EBP stands for is likely to look stupid and lack credibility. Whether all this talk of research actually makes a great deal of difference to the delivery of healthcare and the experience of patients – more than, say, the quality of managers or the amount of work to be got through – is another question. But not understanding what is being said around you is distressing, and you are excluded from nice jokes about homoscedasticity, multicollinearity or saturation. In fact, one strong impulse for nursing as a profession to take up EBP was the desire for credibility and status. Also, as soon as talk of evidence based activity was out of the bag, it became a currency that was not likely to go away for a while. In the late 1980s and early 1990s, in response to rising managerialism and cost-containment, nurses and others were busy in efforts aimed at demonstrating their ‘value for money’. Later in the 1990s, they needed to show that they were acting from a reliable scientific basis. Some looked to ‘evidence’ to demonstrate the value of nursing (Kitson, 1997). The danger of evidence based practice, of which many clinicians were acutely aware, was that it rendered professional decision making accessible to external evaluation. Now managers and policy makers, by collecting information on patient outcomes and having access to research based ‘best treatments’, even the profession’s own protocols, could make penetrating judgements about effectiveness and attempt to enforce standardized treatments (Timmermans and Berg, 2003). So, the first answer is a pragmatic one. Research, and talk of research, is expected to be part of any credible clinical professional’s repertoire.

The second answer to this question is voiced by many of the old-time researchers in nursing and has to do with a kind of intellectual restlessness. I remember the late Lisbeth Hockey (who died in 2004, aged 85), one of the United Kingdom’s nursing research pioneers, telling tales of 1940s ward sisters and staff nurses exasperated by her constant demand for explanations about why things on the ward were done one way rather than another. In such stories, the justification for particular procedures given by the tired staff centres around custom rather than rationale. This fundamental confrontation between a modernizing, youthful, questioning critique and an unintelligent, status based conservatism is often staged by the proponents of research and research-mindedness. In nursing, a myth has developed that nursing practice is largely based on ‘tradition, myths and rituals’ (Walsh and Ford, 1989). Although traditions and rituals perhaps get unjustifiably harsh treatment, the argument is a strong one that the patients of health services would be better off receiving care and treatment from professionals who are prepared to reflect on how they do things. Being more fully conscious at work and having the nerve to ask whether things might be different may lead to an interest in research. A colleague recently told me how, as a ward sister in the early 1980s, she wrote up a kind of protocol for how each consultant on her gastroenterology ward wished their patients to be prepared for, and cared for after, the same surgical procedure. Predictably, the consultants differed widely and, for a certain kind of mind, the mere act of pinning up these lists of preferences alongside each other, each with different implications for patient experience, NHS costs and presumably outcomes, would set out a research agenda.

This leads to the third reason for developing an understanding of research. The advocates of EBP point to links with revolutionary Enlightenment France, where clinicians like Pierre Louis, according to David Sackett, one of EBP’s leading figures, ‘rejected the pronouncements of authorities and sought the truth in systematic observation of patients’ (Sackett, 2000, p. 2). Later he talks about the predicament of medical students and junior doctors who have, he says, to carry out the orders of their consultants, unaware of whether:

. . .the advice received from the experts is authoritative (evidence-based. . .) or merely authoritarian (opinion-based, resulting from pride and prejudice). (Sackett, 2000, p. 5)

So EBP promised a kind of democratizing context where senior staff could be unsettled a little by staff who are conscientious scientists, like Pierre Louis. I think there is some evidence for this happening. In focus groups I have run with nurses over the years, I have heard repeated stories of even junior nurses claiming to have challenged doctors over particular practices with some piece of research evidence (Traynor, Rafferty and Solano, 2003). In these stories, the evidence seems to have won the day and in the process has given these nurses a new professional confidence.

Appraising the quality of research

One of the cornerstones of the Evidence Based movement has been its insistence that practitioners learn to take a critical stance toward published research or, if they cannot themselves undertake this so-called critical appraisal of research evidence, that they avail themselves of the increasing number of systematic reviews of research available in the Cochrane Collaboration (http://www.cochrane.org/index.htm) and other places (such as nursing’s Joanna Briggs Institute, http://www.joannabriggs.edu.au/pubs/systematic_reviews.php). These reviews set out explicit judgements about the technical quality of the research included, or indeed excluded, so that busy clinicians can take the overall conclusions of the review with some confidence. Those involved in EBP tend not to be champions of ambiguity and subjectivity, and it is no surprise that the movement has produced a number of checklists and procedures for making judgements about the quality of an individual piece of research. Some NHS trusts run journal clubs where clinicians who are motivated to do so can be led through the questions involved in critical appraisal of papers relevant to their area of practice. Such appraisal tends to involve questions about research design: for example, how randomization of participants was achieved, how completely this is described, the type of analysis undertaken and the conclusions drawn. It is possible to see such formulaic approaches to dealing with research as simplistic and authoritarian. Indeed, such approaches do not encourage discussion about how the checklists themselves came into being, so they can be seen as the opposite of the democratizing and empowering effect promised by EBP. However, those new to reading research, including students, can also be disempowered when instructors hand out published papers and ask them to ‘critique’ them without giving them a possible framework for doing this. Some students understand this as an invitation simply to criticize the work. Another problem with the democratizing potential of EBP is that, in spite of the proliferation of checklists and short courses offered to clinicians, some groups have used EBP, not necessarily consciously, as a context in which to enhance their own professional group’s standing and influence. Clinical epidemiology, previously a minor medical sub-discipline (compared, for example, to surgery), stood to gain the most. Epidemiologists write:

The main source of new knowledge for doctors in the era of evidence-based medicine (EBM) is medical research results published in professional journals. . .Nevertheless, there are numerous examples of medical studies with serious flaws in design, analysis and interpretation. It is possible to be seriously misled by taking the methodological competence of authors for granted. To critically appraise published articles, doctors should have a basic understanding of the methods of epidemiology and biostatistics. These skills are particularly needed for conducting, analyzing, and reporting results of medical research. Several studies have found that doctors are often not fully competent in basic research methods. (Novack et al., 2006)

There is probably truth to this, and the practitioner who wants to consider better ways of doing what they do, and is in a position to make or recommend changes, would do well to avail themselves of the judgements of expert panels of reviewers or bring potentially useful articles to journal clubs. Some degree of familiarity with the language and concepts of research, and an understanding of research design, is essential even to know what questions to ask, and this book will help clinicians who want this grounding to understand what is going on in research papers.
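
The appraisal checklists mentioned above vary from tool to tool, but to make the general idea concrete, here is a minimal sketch, in Python, of how a handful of design questions might be organized and scored. The items paraphrase the questions mentioned in the text, and the scoring function is a hypothetical illustration, not any published appraisal instrument.

```python
# Illustrative appraisal items paraphrasing the design questions in the text
# (randomization, completeness of reporting, analysis, conclusions); these
# are hypothetical examples, not items from a published appraisal tool.
CHECKLIST = [
    "Was allocation of participants properly randomized?",
    "Is the randomization procedure completely described?",
    "Is the type of analysis appropriate to the study design?",
    "Do the conclusions follow from the reported findings?",
]

def appraise(title: str, answers: list[bool]) -> float:
    """Return the proportion of checklist items answered 'yes'."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("one answer per checklist item expected")
    score = sum(answers) / len(CHECKLIST)
    print(f"{title}: {score:.0%} of criteria met")
    return score

appraise("Hypothetical trial report", [True, True, False, True])
```

A real appraisal involves judgement rather than yes/no scoring; the point of the sketch is only the structure that the movement promotes: a fixed set of explicit questions applied uniformly to every paper.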

Why it’s not so easy

Undergraduate medical and nursing curricula are crowded and only introduce their students to research at a basic level. The level of input is not necessarily sufficient for newly qualified clinicians to feel absolutely confident in the face of, for example, confidence intervals (Chapter 7). Sometimes, for similar reasons of pressure and resources, undergraduate and even postgraduate research teaching and supervision is done by educators with little personal experience of research and only a partial grasp of research principles and methods. Some of those without a secure grasp of the topic can give vague or only very general advice to students, and this too can perpetuate a kind of mystique around research for those trying to get to grips with it. Clinicians do not always have high levels of knowledge of, and confidence around, research, and this is in part a result of the structures of education in the healthcare professions.
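
As an illustration of the kind of material meant here, the following minimal sketch shows one standard way a 95% confidence interval for a sample mean can be computed. The blood pressure readings are invented for the example, and the t critical value for 9 degrees of freedom is hard-coded rather than looked up from a statistical library.

```python
import math
from statistics import mean, stdev

# Hypothetical systolic blood pressure readings (mmHg) from ten patients
readings = [128, 135, 121, 140, 132, 126, 138, 130, 124, 133]

n = len(readings)
m = mean(readings)
s = stdev(readings)       # sample standard deviation (n - 1 denominator)
se = s / math.sqrt(n)     # standard error of the mean

# Two-sided 95% t critical value for n - 1 = 9 degrees of freedom
t_crit = 2.262
lower, upper = m - t_crit * se, m + t_crit * se
print(f"mean = {m:.1f} mmHg, 95% CI = ({lower:.1f}, {upper:.1f})")
```

Loosely read, the interval gives a range of plausible values for the population mean: the procedure that generates such intervals covers the true mean in 95% of repeated samples, under the usual assumptions.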

Those who are eager to promote research in health services delivery present a picture of a process that is pleasing in its simplicity and rationality: define and articulate a valid and important clinical question based on an uncertainty about practice, locate the evidence needed to answer it, appraise that evidence, implement it – taking into account the individual patient context – and evaluate what difference it has made. Nobody pretends this is easy, and implementation is probably the most uncertain, complex and written about part of the formula. Individual nurses may find the business of changing practice more difficult than, say, a medical consultant, for the obvious reason that their spheres of influence tend to be smaller. However, in many NHS trusts it is panels of clinicians, often with experts available to support them, that are involved in considering how to respond to different types of evidence and how policies about practice might be changed. There are many accounts of this kind of process, and the best acknowledge its complexity – how different group members may weigh different types of evidence differently, for example (Dopson et al., 2003; FitzGerald et al., 1999; Gabbay et al., 2003; Moreira, 2005). The point is that it would be a mistake to think that individuals, that is you, bear sole or even the main responsibility for this much talked about business of getting practice to change in response to research evidence. Of course it is sometimes the case that an individual might discover some highly relevant research article and come to work promoting it to others with such energy that it is considered, but my guess would be that this is the exception.

What is quality in research?

The authors of the rest of this book will set out what we mean by quality in research in the different contexts in which research is done and from the different theoretical perspectives from which researchers work. I want only to point out here some of the very basic issues that underpin the rest. These concern asking a useful research question, letting the question determine how you answer it, telling those who read your research what they need to know and making sure that any conclusions and recommendations flow from the findings.

Most people involved in healthcare delivery have, at some point, very strong feelings about some aspect of their experiences. In a context where research is highly valued and its usefulness as a problem solving method probably overrated, many clinicians and students are persuaded to ‘do some research’ around some particular issue even when it is not research that is needed but some other intervention. A clue that this might be the case is when such people say that by doing their research they ‘want to show that GPs don’t understand the role of health visitors or community mental health nurses’, for example. It could be that what would do more good in this situation is a leaflet campaign or a series of information giving visits rather than sending round a questionnaire. What I mean is, there is no genuine research question here. There could be, but perhaps the clinician in this case would get more satisfaction by doing something different. A successful research question in the healthcare context has to be concerned with genuine uncertainty and, in my view, have a degree of specificity. I am open to persuasion, but research questions along the lines of ‘what are the experiences of’ some patient group or clinicians are usually inadequately defined – at least in the context of enquiry about healthcare provision. I say this because often, under intense questioning, the authors of such questions turn out to have a much more specific interest along the lines of ‘How can healthcare workers better support patients undergoing [a particular treatment]?’ So, if the research question is not right, the whole enterprise is hopeless. If it is a genuine question, then so far so good.

Once we have a good research question, the way we answer it, or the method, will almost choose itself. Questions about changes over time or about prevalence or effectiveness can generally be addressed by counting or measuring something: counting admissions of patients with a certain condition in every December since 2003 and comparing them with every June in the same years; sending a questionnaire to every GP in an area and asking if they employ a practice nurse; giving one group of your patients a particular manipulation plus an advice booklet and a similar group just the advice booklet. Questions along the lines of ‘why do our patients not attend outpatients appointments?’ or ‘why do we have such high turnover amongst our midwifery staff?’ are probably best answered by asking the relevant people some questions about this. The details of the different research approaches and designs that might be adopted will be covered in the rest of this book. The point is that the wrong method will not answer the question. Some research supervisors lack breadth of knowledge of research methods and advise students and new researchers to carry out ‘semi-structured interviews’, whatever the question.
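
As a toy version of the first of these counting questions, the sketch below tallies admissions for a condition in Decembers and Junes from 2003 onwards. The records and the condition name are invented for the example; in practice the data would come from hospital information systems.

```python
from collections import Counter
from datetime import date

# Hypothetical admission records: (admission date, condition)
admissions = [
    (date(2003, 12, 4), "asthma"),
    (date(2004, 6, 15), "asthma"),
    (date(2004, 12, 22), "asthma"),
    (date(2005, 6, 3), "copd"),
    (date(2005, 12, 9), "asthma"),
]

# Count asthma admissions by calendar month, from 2003 onwards
by_month = Counter(
    d.month
    for d, condition in admissions
    if condition == "asthma" and d.year >= 2003
)
print(f"December admissions: {by_month[12]}, June admissions: {by_month[6]}")
```

The design question, of course, is what to make of any difference the counts reveal; the counting itself is the easy part.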

The enterprise of science is about rigour and proof, as well as entrepreneurship, ambition, project management and having the right costume (Latour, 1987). While research in the human sciences is very different from research carried out in a laboratory, a basic feature they share is to do with replicability. The readers of a research paper need to be given enough information to judge the adequacy of what has been done, and even to be able to copy what the paper says was done and see if they get the same results. Poetic economy and elegance come second to meticulous giving of detail. It is much better to describe how the text of an interview was analysed than to wave an under-explained technical term around.

Finally, and following on from the notion of research being about an attempt to reduce uncertainty, the expectation of research conclusions is that they were not self-evident to everyone before the piece of research they are attached to was done.

I have just pointed out what is probably obvious to many readers. I will now leave it to the capable pens of the other writers of this book to go into more detail about the deep aspects of quality.

Reflection Points

What is the significance/importance of implementing evidence based practice in healthcare? In your clinical practice, what innovations/developments have been implemented which have changed practice in your area? What were the effects?

References

  • Department of Health (1991) Research for Health: A Research and Development Strategy for the NHS, HMSO, London.
  • Dopson, S., Locock, L., Gabbay, J. et al. (2003) Evidence-based medicine and the implementation gap. Health: An Interdisciplinary Journal for the Social Study of Health, Illness and Medicine, 7 (3), 311–330.
  • FitzGerald, L., Ferlie, E., Wood, M. and Hawkins, C. (1999) Evidence into practice? An exploratory analysis of the interpretation of evidence, in Organisational Behaviour in Health Care (eds S. Dopson and A. Mark), Macmillan, London, pp. 189–206.
  • Gabbay, J., Le May, A., Jefferson, H. et al. (2003) A case study of knowledge management in multi-agency consumer-informed ‘communities of practice’: implications for evidence-based policy development in health and social services. Health: An Interdisciplinary Journal for the Social Study of Health, Illness and Medicine, 7 (3), 283–310.
  • Holdich Stodulski, A. (1995) RCN Study of UK Nursing Journals 1995, Royal College of Nursing, London.
  • House of Lords Select Committee on Science and Technology (1988) Priorities in Medical Research. 1st Report, HMSO, London.
  • Kitson, A. (1997) Using evidence to demonstrate the value of nursing. Nursing Standard, 11 (28), 34–39.
  • Latour, B. (1987) Science in Action: How to Follow Scientists and Engineers through Society, Harvard University Press, Cambridge, MA.
  • Moreira, T. (2005) Diversity in clinical guidelines: the role of repertoires of evaluation. Social Science and Medicine, 60 (9), 1975–1985.
  • Novack, L., Jotkowitz, A., Knyazer, B. and Novack, V. (2006) Evidence-based medicine: assessment of knowledge of basic epidemiological and research methods among medical doctors. Postgraduate Medical Journal, 82, 817–822.
  • Rafferty, A. M., Bond, S. and Traynor, M. (2000) Does nursing, midwifery and health visiting need a research council? NT Research, 5 (5), 325–335.
  • Rafferty, A. M., Traynor, M. and Lewison, G. (2000) Measuring the outputs of nursing R&D: A third working paper. Centre for Policy in Nursing Research: London School of Hygiene & Tropical Medicine.
  • Sackett, D. L. (2000) Evidence-Based Medicine: How to Practice and Teach EBM, Churchill Livingstone, Edinburgh.
  • Timmermans, S. and Berg, M. (2003) The Gold Standard: The Challenge of Evidence-Based Medicine and Standardization in Health Care, Temple University Press, Philadelphia.
  • Traynor, M., Rafferty, A. M. and Lewison, G. (2001) Endogenous and exogenous research? Findings from a bibliometric study of UK nursing research. Journal of Advanced Nursing, 34 (2), 212–222.
  • Traynor, M., Rafferty, A. M. and Solano, D. (2003) Between the lines. Nursing Standard, 18 (8), 16–17.
  • Walsh, M. and Ford, P. (1989) Nursing Rituals, Research and Rational Actions, Heinemann, Oxford.
