AI has dominated the headlines in recent months, following the release of ChatGPT, a chatbot based on a large language model, and the publication of the government’s White Paper setting out its “pro-innovation approach to AI regulation”.
Businesses can no longer turn a blind eye to the advantages AI offers, and will need to start asking some important questions about the legal implications of its use.
In short, the UK government’s proposed approach to AI regulation is largely “laissez-faire”: existing regulators will work together to provide guidance to businesses, and that guidance will form the regulatory framework. This is in stark contrast to the EU’s prescriptive approach, which will see the introduction of a new AI Act setting out legal obligations across the lifecycle of an AI system.
The key areas which employers will need to be mindful of when considering the use of AI in their companies are:
- Discrimination
- Data protection and privacy law
- Unfair dismissal
Discrimination
AI-powered recruiting tools are now used by many large companies to make CV screening more efficient. However, employers will need to be mindful that these tools carry the risk of producing discriminatory or biased results. This was made clear when Amazon was forced to abandon its AI recruiting tool after it taught itself that male candidates were preferable to female candidates.
Data protection and privacy law
The intrusive nature of certain AI systems has considerable implications for privacy law, particularly where monitoring and surveillance algorithms are concerned. Around a third of workers are being digitally monitored at work, with Royal Mail recently admitting that it uses tracking technology to monitor the speed of its postal workers.
In Barbulescu v Romania, the European Court of Human Rights established that the right to privacy under Article 8 of the ECHR can, in principle, extend to protection against workplace monitoring by an employer. This is therefore another area employers will need to be mindful of when using AI technologies.
Unfair dismissal
Under the Employment Rights Act 1996, employees with over two years’ service have the right not to be unfairly dismissed. The ERA makes no explicit reference to AI-informed decisions; however, the legal test of fairness will continue to apply whether or not the employer relied on AI systems in reaching a decision to dismiss.
What does all of this mean for employers?
In light of the above risk factors, employers must take concrete steps to ensure that the use of AI in their businesses does not breach existing legislation.
The following steps are recommended:
- AI strategy
- Training
- Transparency
- Human element
An AI strategy should be developed, containing clear policies on how AI may be used in the company and the steps that must be taken to mitigate risks such as discrimination or breaches of data protection law.
Companies must ensure that all members of staff who use AI software are provided with adequate training. This should cover issues such as the appropriate use of data, accuracy and bias.
Managers must ensure that staff and prospective employees are fully informed about how and when AI is used.
AI has not yet reached the stage of artificial general intelligence (also referred to as ‘strong’ AI), a hypothetical future kind of AI system that could undertake any intellectual task a human can. It is therefore vital that a human element is retained in decision-making, with managers keeping the final say, especially where AI software has been used.