There are a few reasons why research into the validity of assessment within legal professional qualifications in England & Wales has (largely) not happened. These include:
• Profession-led regulation – with its tendency towards a more holistic (some might say less critical) approach to assessment
• Competition between commercial legal education providers – which has meant that examples of good assessment practice, if evaluated, have tended not to be shared outside the organisation
• A lack of public interest – unlike with other professions (teachers and doctors spring immediately to mind), most people have very little to do with the legal profession. Put crudely, there aren’t many votes to be won from putting the legal profession under scrutiny.
Why does that matter?
The concept of validity has evolved since the 1800s (it’s not new!), and the best description of validity I have heard is the Ronseal Test – does the assessment do exactly what it says on the tin?
This in itself is definitely not rocket science, but what assessment theory does is force regulators and assessment developers to articulate exactly what they want an assessment to achieve, and to set out how they will evaluate whether the assessment has achieved that objective.
So if you don’t have validity and validation firmly and robustly embedded in your assessment development and evaluation, you probably can’t tell whether your assessments are doing what they set out to achieve…which makes them at best open to attack, and at worst, pointless!
No way back – some real-life examples
When I led the development and introduction of the Qualified Lawyers Transfer Scheme for the SRA in 2009/10, we took the advice of experts in the legal and medical sector (notably Kathy Boursicot and Paul Maharg, and subsequently Eileen Fry and Richard Wakeford at Kaplan) and utilised metrics which enabled the SRA to evidence the validity of the QLTS assessments.
This was a game-changer – as a regulator, once you appreciate how you can use assessments to evidence how you meet your regulatory objectives, there’s no turning back!
For example, a QLTS Assessment Board will have the reliability of every question, group of questions and assessment presented to it, and Board Members will be able to determine from the data provided (Cronbach’s Alpha together with the Standard Error of Measurement (SEm)) how reliably the test distinguishes competent candidates from those who are not.
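To make those two metrics concrete, here is a minimal sketch of how Cronbach’s Alpha and the SEm can be calculated from candidates’ item scores. The data and function names are invented for illustration – a real Assessment Board would of course work from actual candidate responses and properly validated psychometric software.

```python
# Illustrative sketch only: Cronbach's Alpha and the Standard Error of
# Measurement (SEm) from a matrix of item scores. All data here is invented.

import math
import statistics

def cronbachs_alpha(item_scores):
    """item_scores: one inner list per question, aligned by candidate
    (so column i holds candidate i's scores across all questions)."""
    k = len(item_scores)
    # Variance of each individual question across candidates
    item_variances = [statistics.pvariance(item) for item in item_scores]
    # Each candidate's total score, then the variance of those totals
    totals = [sum(scores) for scores in zip(*item_scores)]
    total_variance = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

def sem(item_scores):
    """SEm = SD of total scores x sqrt(1 - alpha)."""
    alpha = cronbachs_alpha(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    return statistics.pstdev(totals) * math.sqrt(1 - alpha)

# Invented scores: 3 questions, 5 candidates
scores = [[1, 2, 3, 4, 5],
          [2, 2, 3, 4, 5],
          [1, 2, 3, 5, 5]]
print(cronbachs_alpha(scores))  # close to 1.0 = highly consistent test
print(sem(scores))              # small SEm = precise total scores
```

The intuition for a Board reading these numbers: Alpha close to 1 means the questions are behaving consistently with one another, and a small SEm means a candidate’s observed total score is a tight estimate of their “true” score – both of which matter when a pass/fail decision hangs on that score.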
But as common sense will tell you, validity is more than statistics and data, it is also concerned with qualitative judgements about whether the assessment is testing what it needs to test. So one might expect the assessments relating to a qualification which gives an individual rights of audience in the lower courts (i.e. the SQE), to assess whether they are competent to exercise these rights. It sounds so simple…
So whether you are a regulator or an assessment developer, creating the assessments; or a law firm or training provider seeking to hold the regulator or assessment developer to account, it’s a good idea to understand what best practice looks like. Accidental (‘pointless’) assessment is (or at least should be) a thing of the past!