Last updated on 28th August 2014
Common sense on improving outcomes: It only takes a little reflection to realise that, if we want to become more successful at doing something, it's likely to help immensely if we know where we're starting and can track whether we're improving or not. The research literature is very clear about the importance of this approach – see, for example, the major textbook "Development of professional expertise: Toward measurement of expert performance and design of optimal learning environments" – but common sense already makes this pretty obvious. Imagine that we are trying to improve our skills at playing darts. Obviously we need to practise throwing darts at a board so that we can learn to place them more accurately & achieve increasingly good scores.
Now imagine that we're asked to develop dart-throwing skill but we have to wear blindfolds while we're practising and we only get rather vague reports from others as to where the darts are hitting the board. In this situation, it might take an awfully long time to improve. In fact, maybe we wouldn't improve at all. This is a pretty good description of our attempts to become more expert in the majority of occupations and professions. Most people believe that they are getting better & better results the longer they work at their job. Sadly, research study after research study shows that this is usually an illusion. There is typically very little relationship between years spent in practice and how successful we are at achieving good outcomes. Most of us – including nearly all health professionals – are like blindfolded dart players. Is this person I'm working with getting better because of the help that I'm providing, or would they have got better anyway? Maybe my input has actually slowed their recovery? How do I know? Happily this confusing, blindfolded learning situation is starting to change, and this improvement is well worth supporting.
Relevance for psychotherapy: For psychotherapists, there is a very sensible and increasingly powerful research-based initiative encouraging us to track the results we achieve more carefully and check how they measure up to best outcomes in our field. This can meld the best of evidence-based practice with the fine-tuned personalisation achievable through practice-based evidence. As Castonguay and colleagues write in their chapter on "Practice-orientated research" in the superb 2013 edition of the "Handbook of psychotherapy and behavior change" – "At its heart, practice-based evidence is premised on the adoption and ownership of a bona fide measurement system and its implementation as standard procedure within routine practice." So what we need to do is monitor how effective we're being as psychotherapists with well-established outcome measures that are also being used by many other psychotherapists working in fields similar to our own. In this way we can compare our results and see where we're doing well and where we need to improve. The "dart players" who want to improve their success rates can now do so without having to wear blindfolds. The previously very hard task of assessing whether we're getting better at what we do as therapists can become a whole lot easier.
Ways to monitor our practice: There are a number of "bona fide measurement systems" available to us including the "Clinical Outcomes in Routine Evaluation (CORE)", the "Outcome Questionnaire-45 (OQ45)", the "Partners for Change Outcome Management System (PCOMS)", and the "Treatment Outcome Package (TOP)". These assessment & tracking methods, and others in development, are still evolving. However, very encouragingly, they are already making a major contribution to boosting outcomes and significantly reducing deterioration rates (see Castonguay et al, above). One way they do this is by highlighting cases where improvement is not occurring adequately – typically by charting how each client is actually responding when compared with improvement trajectories predicted from databases of large numbers of similar cases.
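The trajectory-comparison idea above can be sketched in a few lines of code. This is only an illustration of the general principle, not the actual algorithm used by CORE, OQ-45, PCOMS or TOP: the expected-trajectory numbers and the tolerance value below are made up for the example, whereas the real systems derive their predicted trajectories from large normative databases.

```python
# Sketch: flag sessions where a client is "not on track" by comparing
# observed session-by-session severity scores against a predicted
# improvement trajectory. Lower scores = less severe symptoms.
# All numbers here are illustrative, not from any real measurement system.

def on_track(observed, expected, tolerance=3):
    """Return one True/False flag per session: True where the observed
    score is no more than `tolerance` points worse (higher) than the
    score predicted for that session."""
    return [obs <= exp + tolerance for obs, exp in zip(observed, expected)]

# Hypothetical expected trajectory for a client starting at 20,
# as might be predicted from a database of similar cases:
expected = [20, 17, 15, 13, 11, 10]
observed = [20, 19, 18, 18, 17, 17]   # this client is improving only slowly

print(on_track(observed, expected))
# → [True, True, True, False, False, False]
```

The string of `False` flags from session four onwards is exactly the kind of early-warning signal these systems surface, prompting the therapist to review the case rather than assume all is well.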
We have known for many years that significant improvement in the first handful of therapy sessions (two to five, say) is a good predictor of eventual overall progress - see, for example, "Early improvement during manual-guided cognitive and dynamic psychotherapies predicts 16-week remission status" and "Do early responders to psychotherapy maintain treatment gains?" This has tended to push me towards using "bona fide measurement systems" like the CORE and PCOMS (see above) so that I can track my clients' progress against predicted trajectories. However, reading recent emerging research on variability in response patterns in eventually successful cases has made me a bit more cautious about this somewhat one-size-fits-all viewpoint - see this year's papers "Nomothetic and idiographic symptom change trajectories in acute-phase cognitive therapy for recurrent depression" and "Shape of change in cognitive behavioral therapy for youth anxiety: Symptom trajectory and predictors of change." This new caution makes me more ready to consider other ways of assessing the effectiveness of my therapy.
Using IAPT data to benchmark how well we're doing: A fine recent overview of success rates obtained by the UK Improving Access to Psychological Therapies (IAPT) programme introduces a new option – see "Enhancing recovery rates: Lessons from year one of IAPT" (freely downloadable in full text) with the paper's abstract reading: "Background: The English Improving Access to Psychological Therapies (IAPT) initiative aims to make evidence-based psychological therapies for depression and anxiety disorder more widely available in the National Health Service (NHS). 32 IAPT services based on a stepped care model were established in the first year of the programme. We report on the reliable recovery rates achieved by patients treated in the services and identify predictors of recovery at patient level, service level, and as a function of compliance with National Institute for Health and Care Excellence (NICE) Treatment Guidelines. Method: Data from 19,395 patients who were clinical cases at intake, attended at least two sessions, had at least two outcomes scores and had completed their treatment during the period were analysed. Outcome was assessed with the patient health questionnaire depression scale (PHQ-9) and the anxiety scale (GAD-7). Results: Data completeness was high for a routine cohort study. Over 91% of treated patients had paired (pre-post) outcome scores. Overall, 40.3% of patients were reliably recovered at post-treatment, 63.7% showed reliable improvement and 6.6% showed reliable deterioration. Most patients received treatments that were recommended by NICE. When a treatment not recommended by NICE was provided, recovery rates were reduced. Service characteristics that predicted higher reliable recovery rates were: high average number of therapy sessions; higher step-up rates among individuals who started with low intensity treatment; larger services; and a larger proportion of experienced staff.
Conclusions: Compliance with the IAPT clinical model is associated with enhanced rates of reliable recovery."
This new IAPT data allows us to use free monitoring questionnaires like the PHQ-9 and GAD-7 to track how therapy is going and - fascinatingly - to compare our overall success rates against a very large database of similar cases. It still makes very good sense to be eagle-eyed about initially slow therapeutic response, while also acknowledging that eventually successful cases don't all follow similar improvement trajectories. In the next post in this sequence - "Improving therapeutic success rates: using UK IAPT data to assess how well we're doing therapeutically" - I will look more closely at the IAPT data and how we can use it to "take off our blindfolds" and start improving our success at "hitting the dartboard" in helping our clients more consistently achieve excellent outcomes.
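To make the "reliable recovery" and "reliable improvement" categories from the abstract above concrete, here is a small sketch of how a completed treatment might be classified from pre- and post-treatment PHQ-9 and GAD-7 scores. The thresholds used are the commonly cited IAPT conventions (PHQ-9: caseness at 10+, reliable change at 6+ points; GAD-7: caseness at 8+, reliable change at 4+ points), but anyone using this in practice should verify them against current IAPT guidance - treat the numbers and the function itself as an illustration, not an official algorithm.

```python
# Illustrative thresholds following commonly cited IAPT conventions
# (verify against current official guidance before relying on them):
PHQ9_CASENESS, PHQ9_RELIABLE_CHANGE = 10, 6   # depression measure
GAD7_CASENESS, GAD7_RELIABLE_CHANGE = 8, 4    # anxiety measure

def classify(phq9_pre, phq9_post, gad7_pre, gad7_post):
    """Classify a completed treatment into one of the outcome categories
    used in IAPT-style reporting. A sketch only - edge cases (e.g. one
    measure improving while the other deteriorates) are handled crudely."""
    improved = (phq9_pre - phq9_post >= PHQ9_RELIABLE_CHANGE or
                gad7_pre - gad7_post >= GAD7_RELIABLE_CHANGE)
    deteriorated = (phq9_post - phq9_pre >= PHQ9_RELIABLE_CHANGE or
                    gad7_post - gad7_pre >= GAD7_RELIABLE_CHANGE)
    case_pre = phq9_pre >= PHQ9_CASENESS or gad7_pre >= GAD7_CASENESS
    case_post = phq9_post >= PHQ9_CASENESS or gad7_post >= GAD7_CASENESS
    if improved and case_pre and not case_post:
        return "reliable recovery"
    if improved:
        return "reliable improvement"
    if deteriorated:
        return "reliable deterioration"
    return "no reliable change"

print(classify(18, 5, 12, 4))    # big drops, below caseness at end
# → reliable recovery
print(classify(18, 14, 12, 10))  # drops too small to count as reliable
# → no reliable change
```

Running a classification like this over one's own completed cases is what makes the benchmarking possible: the resulting percentages can be set alongside the IAPT figures quoted above (40.3% reliably recovered, 63.7% reliably improved, 6.6% reliably deteriorated) to see how one's own "darts" are landing.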