by Emma Hamilton
21 May 2024
I think we would all agree that the tech space is fast-moving and AI even more so – just a 10-second glance at LinkedIn today has flagged up the AI Safety Institute’s Fourth Progress Report (Fourth progress report | AISI) and the announcement of the AI safety cooperation between the UK and Canada (UK-Canada science of AI safety partnership – GOV.UK (www.gov.uk)). On 21 May the AI Seoul Summit kicks off, co-hosted by the UK and the Republic of Korea. Things are moving at an incredibly exciting pace.
With this in mind, it feels a little outdated of me to be issuing an update on guidance that was released nearly two months ago, but bear with me…
Although the new(ish) AI guidance is targeted at the HR and recruitment industries, it’s a worthwhile read for all companies considering introducing AI systems into their business.
The recruitment industry is one easily impacted by societal shifts and the financial stability of the country. Covid and the years that followed were a perfect example – from complete stagnation in the majority of sectors to the “great resignation” shortly after, with people (me included) changing priorities, career aspirations, and expectations of their work/life balance. A flood of people looking to change roles naturally leaves recruiters dealing with a flood of data, and has no doubt led a number of them to consider whether there’s a better/easier/quicker way to sift through applicants and find the best of the best.
Cue “AI tech”. AI, whether in HR and recruitment or in any other business, has the potential to create greater efficiency, scalability and consistency, but it also brings a number of new challenges to overcome. Where a system offers decision-making capabilities, it also brings a risk of bias. Where a system creates an opportunity for immediate access, it risks excluding those unable to access it or without the skills to make use of it. Where a system targets ideal candidates, it risks discriminatory approaches. This has led the Department for Science, Innovation and Technology to issue its “Responsible AI in Recruitment Guidance” (available here: Responsible AI in Recruitment guide – GOV.UK (www.gov.uk)).
The guidance reiterates the UK Government’s AI regulatory principles from the 2023 White Paper “A pro-innovation approach to AI regulation” (being safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress) and sets out some helpful questions to ask when sourcing an AI system. Whilst we may think these steps only need to be taken when making a big switch in technology, many existing software providers may now be using AI within their service offering, so it would be worth asking these questions of existing service providers too, and updating your contracts with them if needed.
The guidance goes on to consider some of the more difficult questions around AI, such as whether the system could amplify bias (some historic AI tools have been found to produce sexist or discriminatory output, for example) – bias which often stems from the training data used to create the system – and what steps the software developers have taken to overcome it. Data protection is of course another important consideration, both in terms of ensuring that candidate and client data is secure and in terms of how the system helps (or hinders) compliance with other data protection principles, such as keeping on top of data retention periods and correcting or deleting data when required.
The guidance makes it clear that risk assessments will be key, relating to both the data protection impact and the AI system in general. We have been stressing the same point to clients over the last few months. Knowledge is power, so they say, and understanding where your vulnerabilities are at the outset will be incredibly important to making the most of new technological developments whilst keeping your risks to a minimum.
In my view the guidance provides a great starting point for businesses, whether in the HR and recruitment space or otherwise, taking companies through the initial considerations, providing sample questions and examples, and pointing to some helpful resources. I particularly like the “Example use cases and risks” in Annex A, which sets out the risks associated with specific types of software that may be used in recruitment. Will it address all the concerns of every business? No, but as the guidance says, there can be no one-size-fits-all approach. Is it useful? Absolutely. I have no doubt that there will be many more guides for other industries to come.