
18 May 2022

The Stern Review provided the stimulus for the creation of IDAP: the Interdisciplinary Advisory Panel to REF 2021. There was a recognition, also clear from the Metric Tide report, that institutions and individuals had been reluctant to submit outputs perceived as interdisciplinary to REF 2014, for fear that the panels would not judge them fairly. The evidence did not support the idea that unfairness had crept into scoring last time around, but HEFCE (and subsequently Research England) wanted to put in place a system that could remove, or at least alleviate, that fear, given the overall importance of interdisciplinary research, and to work with main and sub-panels to ensure fairness in assessment was maintained.

As Chair of IDAP throughout the process, I felt a real sense of responsibility to work for the community to ensure both that any additional criteria introduced into the assessment process ‘worked’ and that the other ingredients we proposed, such as the creation of interdisciplinary advisor roles on all sub-panels, were effective and did not get in the way of overall operations. My colleagues on IDAP were a pleasure to work with as we set out to achieve these aims during the criteria-setting phase, and then, with refreshed membership including main panel advisors, as we watched over the process during the assessment phase to see what worked and what did not.

During the assessment phase I have been involved in main panel chairs’ meetings, as discussions ensued about the unfolding data, including on interdisciplinary research. It has been heartening to hear the confidence with which each chair felt their sub-panels were able to handle IDR and assess it appropriately, views echoed by the main panel IDR leads on IDAP. These views demonstrate that the measures put in place across the process have helped to ensure a strong focus on IDR. It was on the agenda at every main panel meeting; the strengthened measures have supported the assessment, and these positive actions should have sent a clear message highlighting the value of IDR within the REF.

Nevertheless, it is important that complacency does not set in. I am sure the sub-panels were indeed populated with many individuals of great breadth of experience, but truly original IDR may cross so many boundaries that it will continue to need input from outside any given unit of assessment. The confidence the community will feel in submitting IDR – if future equivalent assessment processes are run – will continue to depend on indications of flexibility and broad-mindedness in how IDR is treated.

Readers of this blog, and of the sub-panel reports, will know that not everything worked as well as we would have liked. Notably, the IDR flag was not well understood. With the percentage of flagged outputs submitted by different institutions varying from essentially zero to nearly 100%, it is clear that people interpreted its use very differently, to the extent that it was not a useful indicator of anything. Personally, I still believe it ought to be possible to make such a system of flagging viable, but this requires both that everyone appreciates it cannot lead to detriment and that the person inserting the flag is sufficiently close to the research to gauge correctly whether or not the flag is appropriate. The distinction between cross-referral and joint assessment also seemed to cause confusion and will need to be rethought in any future process.

However, I firmly believe that having clear criteria for IDR is a crucial requirement. In particular, it is important to realise ‘that the criteria [of originality and significance] do not need to be demonstrated across all of the constituent parts brought together in the work, but may be identified in one or more parts, or in their integration’, but also that ‘All elements of the research should demonstrate appropriate academic rigour’. I have too often seen situations at grant-giving panels where applications fail because referees seem to believe that a good piece of work requires every part of it to be cutting-edge, failing to recognise that originality and significance can arise from the synthesis of ideas. We, as IDAP, were determined to spell out that excellent IDR can emerge in many different ways.

The pandemic will, inevitably, have had an impact on the conduct of the entire REF process. From the IDR point of view, I think where it had most impact was in limiting the ability of IDR advisors to share thoughts on the process with one another. The intention had been to have a network of these people who could compare notes and provide mutual insight. Trying to achieve much of this via Zoom proved all but impossible. Different approaches were tried, but nothing beats a quiet word, in person, over a cup of coffee. Whether ultimately this mattered is a different matter, but it is something to note for the future.

I hope the community does indeed feel confident that, where interdisciplinary outputs were submitted, they were judged fairly, and that overall interdisciplinary research is in a good place. However, I feel there is scope in the process to tighten the relationship between what outputs are submitted and what institutions say about supporting interdisciplinary research, to ensure that warm words at a high level are actually translated into a feeling of confidence at the level of the individual researcher. I would like to take this opportunity to thank all those members of IDAP, in either phase, who made the work of the advisory panel so satisfying and rewarding, and also the team at Research England, whose support of our work was so meticulous and constructive.