1.1 GETTING STARTED 

There are a number of reasons why a new review may be considered. Commissioned calls for evidence synthesis usually concern topics where a gap in knowledge has been identified and prioritised and a question posed. Alternatively, the idea for a review may be investigator led, with a topic identified from an area of practice or research interest; such reviews may or may not be funded. Whatever the motivation for undertaking a review, the preparation and conduct should be rigorous.

1.1.1 Is a review required?

Before undertaking a systematic review, it is necessary to check whether there are already existing or ongoing reviews, and whether a new review is justified. This process should begin by searching the Database of Abstracts of Reviews of Effects (DARE)1 and the Cochrane Database of Systematic Reviews (CDSR).2 DARE contains critical appraisals of systematic reviews of the effects of health interventions. CDSR contains the full text of regularly updated systematic reviews of the effects of health care interventions carried out by the Cochrane Collaboration. Other sites to consider searching include the National Institute for Health and Clinical Excellence (NICE) and the NIHR Health Technology Assessment (NIHR HTA) programme websites. The Campbell Collaboration website3 contains the Campbell Library of Systematic Reviews, giving full details of completed and ongoing systematic reviews in education, crime and justice, and social welfare. The Evidence for Policy and Practice Information (EPPI) Centre,4 whose review fields include education, health promotion, social care and welfare, and public health, has a database of systematic and non-systematic reviews of public health interventions (DoPHER). It may also be worth looking at sites such as the National Guidelines Clearinghouse (NGC)5 or the Scottish Intercollegiate Guidelines Network (SIGN),6 as many guidelines are based on systematic review evidence. Searching the previous year of MEDLINE, or other appropriate bibliographic databases, may also help to identify recently published reviews.
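
As a purely illustrative sketch of that final step, the Python example below queries the previous year of MEDLINE, via PubMed's public NCBI E-utilities esearch interface, for records indexed with the systematic review publication type. The topic string, function name and result limit are hypothetical placeholders, and a real search strategy should be designed with an information specialist rather than copied from this sketch.

    import requests

    EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def find_recent_reviews(topic, days=365, max_results=50):
        """Return PubMed IDs of records tagged as systematic reviews that
        match the topic and were published in roughly the last `days` days."""
        params = {
            "db": "pubmed",
            # Combine the topic with PubMed's systematic review publication
            # type; a real strategy would also use controlled vocabulary (MeSH).
            "term": f"({topic}) AND systematic review[pt]",
            "reldate": days,      # restrict to records from the last N days...
            "datetype": "pdat",   # ...judged by publication date
            "retmax": max_results,
            "retmode": "json",
        }
        response = requests.get(EUTILS_ESEARCH, params=params, timeout=30)
        response.raise_for_status()
        return response.json()["esearchresult"]["idlist"]

    if __name__ == "__main__":
        # Hypothetical topic, used purely for illustration.
        for pmid in find_recent_reviews("smoking cessation AND pregnancy"):
            print(f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/")

Records retrieved in this way are only a starting point: each candidate review would still need to be appraised for quality and relevance as described below.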

If an existing review is identified which addresses the question of interest, then the review should be assessed to determine whether it is of sufficient quality to guide policy and practice. In general, a good review should focus on a well-defined question and use appropriate methods. A comprehensive search should have been carried out, clear and appropriate criteria used to select or reject studies, and the process of assessing study quality, extracting and synthesising data should have been unbiased, reproducible and transparent. If these processes are not well-documented, confidence in results and inferences is weakened. The review should report the results of all included studies clearly, highlighting any similarities or differences between studies, and exploring the reasons for any variations.

Critical appraisal can be undertaken with the aid of a checklist7,8,9,10 such as the example outlined in Box 1.1. Such checklists focus on identifying flaws in reviews that might bias the results.8 Quality assessment is important because the effectiveness of interventions may be masked or exaggerated by reviews that are not rigorously conducted. Structured abstracts included in the DARE database1 provide worked examples of how a checklist can be used to appraise and summarise reviews.

If a high quality review is located but was completed some time ago, an update of the review may be justified. Its current relevance will need to be assessed, which is particularly important in fields where the research is rapidly evolving. Where appropriate, collaboration with the original research team may assist the update by providing access to the data they used. However, little research has been conducted on when and how to update systematic reviews, and the feasibility and efficiency of the identified approaches are uncertain.11 If a review is of adequate quality and still relevant, there may be no need to undertake another systematic review.

Where a new systematic review or an update is required, the next step is to establish a review team, and possibly an advisory group, to develop the review protocol.

Box 1.1: Critically appraising review articles

  • Was the review question clearly defined in terms of population, interventions, comparators, outcomes and study designs (PICOS)?

  • Was the search strategy adequate and appropriate? Were there any restrictions on language, publication status or publication date?

  • Were preventative steps taken to minimize bias and errors in the study selection process?

  • Were appropriate criteria used to assess the quality of the primary studies, and were preventative steps taken to minimize bias and errors in the quality assessment process?

  • Were preventative steps taken to minimize bias and errors in the data extraction process?

  • Were adequate details presented for each of the primary studies?

  • Were appropriate methods used for data synthesis? Were differences between studies assessed? Were the studies pooled, and if so was it appropriate and meaningful to do so?

  • Do the authors’ conclusions accurately reflect the evidence that was reviewed?

1.1.2 The review team

The review team will manage and conduct the review and should have a range of skills. Ideally these should include expertise in systematic review methods, information retrieval, the relevant clinical/topic area, statistics, health economics and/or qualitative research methods where appropriate. It is good practice to have a minimum of two researchers involved so that measures to minimize bias and error can be implemented at all stages of the review. Any conflicts of interest should be explicitly noted early in the process, and steps taken to ensure that these do not impact on the review process.

1.1.3 The advisory group

In addition to the team who will undertake the review, there may be a number of individuals or groups who are consulted at various stages, including, for example, health care professionals, patient representatives, service users and experts in research methods. Some funding bodies require the establishment of an advisory group that will comment on the protocol and final report and provide input to ensure that the review has practical relevance to likely end users. Even if this is not required, and even where the review team is knowledgeable about the area, it is still valuable to have an advisory group whose members can be consulted at key stages.

Engaging with stakeholders who are likely to be involved in implementing the recommendations of the review can help to ensure that the review is relevant to their needs. The particular form of user involvement will be determined by the purpose of the consultation. For example, when considering relevant outcomes for the review, users may suggest particular aspects of quality of life that it would be appropriate to assess. A review evaluating interventions to promote smoking cessation in pregnancy incorporated the views of users to considerable effect, including outcomes more relevant to users as a result of their involvement.12 However, consultation is time-consuming and needs to be taken into account in the project timetable. Where reviews have strict time constraints, wide consultation may not be possible.

At an early stage, members of the advisory group should discuss the audiences for whom the review findings are likely to be relevant, helping to start the planning of a dissemination strategy from the beginning of the project.

The review team may also wish to seek more informal advice from other clinical or methodological experts who are not members of the advisory group. Likewise, where an advisory group has not been established, the review team may still seek advice from relevant sources.

Summary: Getting started

  • Whatever the motivation for undertaking a review, the preparation and conduct should be rigorous.

  • A search of resources such as the DARE database should be undertaken to check for existing or ongoing reviews, to ensure a new review is justified.

  • A review team should be established to manage and conduct the review. The membership should provide a range of skills, including expertise in systematic review methods, information retrieval, the relevant clinical/topic area, statistics, health economics and/or qualitative research methods where appropriate.

  • Formation of an advisory group including, for example, healthcare professionals, patient representatives, service users and experts in research methods may be a requirement of some funding bodies. In any event, it may be valuable to have an advisory group, whose members can be consulted at key stages.

  • The review team may wish to seek advice from a variety of clinical or methodological experts, whether or not an advisory group is convened.