Past CHE Economic Evaluation Seminars 2005

22 December 2005 

Title: How cost-effective is screening for abdominal aortic aneurysms? A long-term perspective based on the MASS trial 
Presenter: Dr Lois Kim

Abstract: Screening for abdominal aortic aneurysms (AAA) has been investigated in a number of randomised trials that have consistently reported an AAA-related mortality benefit in the group invited to screening. Reliable estimates of long-term cost-effectiveness are now needed to inform policy decisions for AAA screening programmes. A Markov decision model for screening is described and extrapolated to 30 years. The strategy modelled involves a one-off scan at age 65, with annual and three-monthly follow-up scans for small and medium aneurysms respectively. Referral for elective surgery occurs at an aortic diameter of 5.5 cm; without this elective intervention, aneurysms may rupture, requiring emergency surgery to prevent death. Model parameters are estimated from patient-level data from the UK Multi-centre Aneurysm Screening Study. Model structure is validated on this trial’s data, and input parameter uncertainty is addressed by probabilistic sensitivity analysis.
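The surveillance strategy described above can be illustrated as a simple Markov cohort model. The states and annual transition probabilities below are illustrative placeholders only, not parameters estimated from the MASS trial:

```python
# Minimal Markov cohort sketch of an AAA screening/surveillance model.
# All transition probabilities are made up for illustration.
import numpy as np

states = ["no_aaa", "small_aaa", "medium_aaa", "post_surgery", "aaa_death", "other_death"]

# Row i -> column j: annual probability of moving from state i to state j.
P = np.array([
    [0.960, 0.010, 0.000, 0.000, 0.000, 0.030],  # no AAA detected at the one-off scan
    [0.000, 0.900, 0.060, 0.005, 0.003, 0.032],  # small AAA (annual rescan)
    [0.000, 0.000, 0.850, 0.100, 0.015, 0.035],  # medium AAA (three-monthly rescan)
    [0.000, 0.000, 0.000, 0.960, 0.005, 0.035],  # after elective repair at 5.5 cm
    [0.000, 0.000, 0.000, 0.000, 1.000, 0.000],  # AAA-related death (absorbing)
    [0.000, 0.000, 0.000, 0.000, 0.000, 1.000],  # other-cause death (absorbing)
])

cohort = np.array([0.95, 0.04, 0.01, 0.0, 0.0, 0.0])  # state distribution at age 65
for _ in range(30):  # extrapolate over 30 annual cycles
    cohort = cohort @ P

print(dict(zip(states, cohort.round(4))))
```

In a probabilistic sensitivity analysis, the entries of P would themselves be drawn from distributions (e.g. Dirichlet rows) and the 30-year run repeated many times.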

24 November 2005

Title: Multiple appraisal of drugs in the UK health care system: Is this a sensible use of evaluation resources?
Presenter: John Hutton (The MEDTAP Institute at UBC, London)

Abstract: Background: There are three separate bodies using technology assessment (including economic evaluation) to provide guidance to the NHS on the use of drugs. The National Institute for Health and Clinical Excellence (NICE), the Scottish Medicines Consortium (SMC) and the All Wales Medicines Strategy Group (AWMSG) have different remits and different geographical jurisdictions. These differences offer scope for complementary working, but also raise the possibility of unnecessary duplication of effort. The paper will review the working of the three institutions and develop methods of assessing whether they represent an efficient use of resources.

Approach: Using a template characterizing the stages of an appraisal and the features of an HTA system (Hutton et al., 2006), the working processes of the three different bodies are compared. The main source of information is the documentation produced by these bodies on their rules, processes and decisions. Commentaries and case studies of particular decisions from the health economics and health policy literature are also used. Parallels are drawn with methods of regulating market access for drugs in other European countries.

Findings: NICE procedures are the most thorough and well documented but, because of their resource-intensive nature, take longer to apply. Methodological expectations for company submissions are similar between the organizations, but the degree of independent review and input varies with the annual number of technologies assessed. Speed of decision-making is traded against the degree of consultation and the opportunity for manufacturers and sponsors of technologies to comment on draft decisions. Judging from documented sources, the aspect of appraisal systems on which there is least information is the implementation and impact of recommendations. In the majority of cases, SMC and NICE decisions on the same drug have been consistent. SMC most resembled equivalent systems in other European countries, designed to evaluate all new drugs at launch. AWMSG was most overtly concerned with the cost implications of prescribing.

Discussion: Most differences between the organizations could be explained by their differing objectives, scope of activities and accountability.  SMC is most concerned with timely advice to Health Boards on all drugs.  NICE has a wider remit including production of clinical guidelines and the technology appraisal programme is designed to fit in with these other activities. AWMSG is concerned with high cost products.  Separate recommendations for the NHS in each of the constituent countries of the UK may be justifiable because of geographical and demographic differences.  The need for a separate process to determine those recommendations remains to be established.  Properly coordinated, the three approaches could provide the UK NHS with a flexible and effective system of technology assessment.  Without such coordination, a waste of valuable evaluation resources within the public and private sectors is likely.

27 October 2005 - Rachel Elliott (Harkness Fellow, University of Manchester and Harvard School of Public Health)

29 September 2005

Title: Extrapolation of survival curves using relative survival models
Presenter: Dr Paul Lambert (Lecturer in Medical Statistics, Centre for Biostatistics & Genetic Epidemiology, Department of Health Sciences, University of Leicester)

Abstract: Relative survival methods are used extensively in population-based cancer studies, where they separate mortality associated with the disease of interest from mortality due to other causes. Relative survival models can be used to estimate the excess mortality rate associated with a disease by incorporating expected survival (obtained from routine national data). For many diseases the excess mortality rate varies in the first few years but then stabilises at a constant value (λ). In some diseases statistical cure is reached (λ=0), whilst in more chronic diseases there continues to be excess mortality associated with the disease (λ>0). Here we show that, by making plausible assumptions about the excess mortality rate beyond the length of the study, it is possible to extrapolate the survival curve.

If one is willing to assume that the excess mortality rate (or equivalently the interval-specific relative survival ratio) will remain constant beyond the length of the study, then the all-cause survival curve can be extrapolated by combining the (known) expected mortality rate with the excess mortality rate. There are a number of options for the choice of λ: (i) it can be assumed to be fixed (either zero or some positive constant); (ii) it can be estimated (with uncertainty) from the available data if there is sufficient follow-up; or (iii) previous evidence can be incorporated through appropriate prior distributions in a Bayesian analysis.
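Under option (i), a fixed λ, the extrapolation amounts to accumulating the expected hazard plus the constant excess hazard. A minimal sketch, with made-up background rates rather than real life-table data:

```python
# Sketch of extrapolating an all-cause survival curve by adding a constant
# excess mortality rate (lambda) to known expected (background) rates.
# The rates below are illustrative, not taken from any real life table.
import math

expected_hazard = [0.02 + 0.002 * t for t in range(40)]  # rising annual background rates
excess_hazard = 0.05  # assumed constant excess rate beyond follow-up (lambda > 0)

def extrapolate_survival(expected, lam):
    """All-cause survival from the cumulative (expected + excess) hazard."""
    surv, cum_hazard = [], 0.0
    for h in expected:
        cum_hazard += h + lam
        surv.append(math.exp(-cum_hazard))
    return surv

s_cured = extrapolate_survival(expected_hazard, 0.0)    # statistical cure: lambda = 0
s_excess = extrapolate_survival(expected_hazard, excess_hazard)

# With lambda = 0 the cohort follows the expected survival curve; with
# lambda > 0 it falls below the expected curve at every time point.
print(s_cured[-1], s_excess[-1])
```

Summing the yearly survival probabilities of each curve (and differencing them) gives the loss in the expectation of life referred to below.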

I will show how the method generally works well in population-based cancer studies, where it is of interest to extrapolate the survival curve in order to estimate the loss in the expectation of life and the proportion of expected life lost. I will also discuss how the methods can be applied in trial-based cost-effectiveness analyses, where health care policymakers are interested in making decisions from a long-term perspective rather than one limited to the length of a trial. Parametric survival models have been used in cost-effectiveness analyses to extrapolate survival curves, with results being very sensitive to the choice of distribution. The use of relative survival models to extrapolate survival curves beyond the length of the available data may lead to improved estimates over existing methods.

28 July 2005 - Susan Griffin (CHE, York)

23 June 2005 - (CHE, York)

26 May 2005 - Christian Asseburg (CHE, York)

10 May 2005

Title: Handling uncertainty in modelling
Presenter: Marian Scott, University of Glasgow

Abstract: There is a rapidly growing body of literature (not all statistical) on the assessment and quantification of uncertainty in modelling. Sensitivity analysis (SA) is a general methodology used to evaluate the sensitivity of model output to changes in model input, i.e. the rate of change of the response function relative to the input parameters. There are a number of different methods for carrying out a sensitivity analysis, ranging from simple one-at-a-time methods to global, multivariate methods. There are also strong links to classical design of experiments. SA is closely linked to uncertainty analysis (UA), another computational approach, where the objective is to evaluate the uncertainty in the model response as a result of uncertainties in the model input parameters (parametric uncertainty) and in the model form itself (structural uncertainty). In this talk, SA and UA tools, their use and the challenges presented in their application to some complex models will be discussed.
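The simplest of the methods mentioned above, a one-at-a-time (OAT) analysis, can be illustrated in a few lines. The toy model and the +10% perturbation are illustrative choices, not part of the talk:

```python
# One-at-a-time (OAT) sensitivity analysis sketch: perturb each input of a
# toy model in turn and record the relative change in the output.

def model(x1, x2, x3):
    # Arbitrary response function for illustration.
    return 3.0 * x1 + 0.5 * x2 ** 2 - x3

baseline = {"x1": 1.0, "x2": 2.0, "x3": 0.5}
y0 = model(**baseline)

sensitivity = {}
for name, value in baseline.items():
    perturbed = dict(baseline)
    perturbed[name] = value * 1.10  # +10% change in one input at a time
    sensitivity[name] = (model(**perturbed) - y0) / y0  # relative output change

# Rank inputs by the magnitude of their effect on the output.
ranking = sorted(sensitivity, key=lambda k: abs(sensitivity[k]), reverse=True)
print(sensitivity, ranking)
```

Global methods differ in that they vary all inputs together over their whole ranges (e.g. by Monte Carlo sampling), which captures interactions that OAT misses.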

28 April 2005

Title: Evidence and values, hopes and costs. A framework to guide coverage decisions for unproven therapies
Presenter: Steven D. Pearson, Atlantic Fellow in Public Policy 2004-2005 (National Institute for Health and Clinical Excellence)

Abstract: One month ago, Karl Claxton and colleagues published an article titled "When is evidence sufficient?" The article discussed ways to consider available evidence in medical and coverage decision-making, and suggested ways to determine when more evidence would be useful. I will pick up these themes in my talk. I will present a perspective on how questions of the sufficiency of evidence are addressed by US health plans in their coverage decisions. In this context I will discuss the elusive dividing line between 'medically necessary' and 'experimental' therapies. I will continue from that background to present a new taxonomy with which to dissect and describe the key characteristics of unproven therapies. This taxonomy seeks to move beyond simplistic dichotomies by defining the specific characteristics that are best able to parse the interwoven ethical and practical issues inherent in difficult coverage decisions. Based on this analysis, I will present for your critique a set of recommendations for the standard of evidence required to justify coverage for various types of new therapies. The overarching purpose of my talk will be to tweak Karl's nose (just kidding) and to help the diverse group of decision-makers in public and private insurance programs make more consistent, valid, and justifiable coverage decisions, thus contributing to a new public confidence in their clinical legitimacy and ethical sensitivity.

Who to contact

For more information on these seminars, contact: