Setting the scene for the Ofsted Annual Report – and FE improvement


Colin Forrest, Carolyn Medlin and Mike Cooper consider the clues in Ofsted Annual Reports, past and present, as to what impact the Inspectorate has had on improvement in the sector and how well it performs according to its own criteria…

It’s nearly 10 years since Ofsted under David Bell published Improvement through inspection – ‘a sincere attempt at externally tested self-evaluation’ of its impact on the quality of the FE and Skills sector.

As Ofsted re-engages directly with improvement activities, is a similar review now overdue? Each year, the Chief Inspector’s Annual Report gives a picture of the ‘state of the nation’ – but offers few clues as to how well the inspectorate itself contributes to improvement.

Clearly, the findings of last year’s report were misrepresented, especially when applied to the whole FE and Skills sector. A more conciliatory approach from Ofsted has been signposted for several months, however. Its Director of FE and Skills, Matthew Coffey, wrote in the summer:

I would like to commend the hard work of so many further education and skills leaders, teachers and trainers and other staff in helping to raise standards. I have been impressed by the determined and positive response I have seen to last year’s Chief Inspector’s Annual Report. I look forward to seeing the positive impact this will have for learners.

Today, the FE and Skills sector looks forward to the Annual Report for 2012–13. Are we seeing the return of a genuine ‘shared ownership’ of improvement? If so, what are the implications for providers and Ofsted?

There are clues, but the answers so far are neither clear nor complete. The list of Ofsted’s priorities for 2013–14 includes: ‘increase the impact of inspection’ and ‘increase the impact and scope of improvement work’. But there is no indication of how these impacts are to be measured, evaluated and shared.

If Ofsted argues that its direct and positive impact comes through how it measures and evaluates improvement – through its processes, grading and reporting – parts of the sector may well be unconvinced. Guidance on ‘Support and Challenge’ was updated in July, but it is still hard to see how these elements are balanced to support improvement. Indeed, little consideration appears to have been given to what the common barriers to improvement are, how they are best overcome, and how improvement can be sustained.

Providers attend Ofsted improvement events ‘by invitation’, selected by narrow criteria, and such occasions are often over-subscribed. These events do have the potential to enhance improvement; but this cannot be taken as a given, since attendance is only a first step towards making changes ‘back at the ranch’. At the very least, HMI input on such occasions must be inspiring, challenging and imaginative if it is to exemplify the very characteristics that Ofsted looks for in inspections.

Further difficulties exist. It is hard to see how improvement is engendered through a summative, external process, even when close links are made between providers and individual HMIs. Certainly, providers report that these new relationships are valued – but the level of resources available from Ofsted is limited.

Pressure is increasing from other directions, too. The recent Skills Commission report Move to Improve recommends that Ofsted quickly publish an account of the impact it has had in the first year of ‘Support and Challenge’. That account needs to demonstrate impact convincingly: establishing strong causality rather than coincidence, and using data in exactly the reliable and robust ways Ofsted demands of providers, rather than through generalised assertions.

We offer one example from recent data. Inspections since this September (as of 2 December 2013) show that 27% of providers improved by a grade since their previous inspection (where a comparison is applicable), yet 29% dropped at least one grade and 34% were unchanged. Is this evidence of adequate improvement – and of Ofsted’s impact upon it? If not – and we doubt both – then how does Ofsted propose to strengthen its own influence on these patterns from here on?

The Inspection Handbook, with its rationale for how improvement is promoted by inspection, shows a narrow perspective on that concept. What does improvement look like, in all its dimensions? Taking risks (or, to use a near-synonym much employed these days, ‘innovating’) and making mistakes are an important part of improvement. But what scope is there for either, when providers are under a variety of pressures to come up with quick answers and sustainable gains, alongside decreasing resources?

There are other difficult issues, neither easily dismissed nor easily answered. For instance, given the general concern about consistency across inspectors, and the need to interrogate improvement plans and actions rigorously, what are the risks of embarking on such a journey when it is largely steered by a single HMI? And what is the purpose of good-practice case studies when the examples are sometimes so general and vague that they cannot easily be understood or implemented, or simply repeat fairly obvious exhortations to ‘do better’?

It’s also worth asking: although inspectors are trained in the art, or science, of inspecting and evaluating, is that in itself sufficient? How good are they as change agents? That link is by no means guaranteed.

And equally – when the emphasis is on sector self-improvement, in the absence of a specific improvement body such as the Learning and Skills Improvement Service (LSIS) – just how good are the successful organisations identified by Ofsted at fostering culture change in others, and facilitating the transfer of good practice?

In fact, FE ‘self-improvement’ has a long history; and a chequered one. Providers, however outstanding, are increasingly hard-pressed. Helping others requires considerable co-operation with the newly-refocused inspectorate. Is Ofsted as good at that as it needs to be – and the sector needs it to be?

Further issues for improvement arise from the reductionist nature of inspection’s outputs. Inspections capture huge rafts of qualitative and quantitative information: evidence forms, for instance, contain a rich narrative on teaching, learning and assessment. Yet the vast bulk of this information is held only within the inspectorate, and remains largely inaccessible for improvement purposes.

We offer two final and significant points for this discussion. First, the Chief Inspector’s report does not mention that some providers saw significant improvements in grades when they appealed against inspection findings – and this despite the inspectorate’s usual haste to publish, even while a complaint is still being investigated. Second, evaluations of inspection are not in the public domain.

So, how can the inspectorate claim full co-ownership of the improvement agenda when its own approaches appear to fall short – not only of David Bell’s expectations a decade ago, but also of the expectations it has of providers in the same territory of open, rigorous, data-focussed self-assessment and improvement planning?

This article originally appeared in FE News of 11 December 2013. Minor amends were made on 17 December 2013.
