The What, Why and How: What's AI? Why should I care? How is it being regulated?

By Isabella McMeechan | Published 21 August 2023

Overview

The AI hype is intense. Regulators worldwide are on edge, and there's a lot of contradictory information. On the one hand, AI has been proclaimed an existential threat to humanity and decried for spreading disinformation in recent elections. On the other, it has been touted as the answer to all our problems, with the British Computer Society publishing a letter calling for AI to be seen as a "force for good".

AI – is it good, is it bad? Is it a panacea, or are we opening Pandora's box? This series aims to help you cut through the headlines, explaining what AI is, what regulators are doing about it, and what it all means for you and your business.

What is AI?

AI is already in use by multiple businesses, but there's no official legal definition of 'Artificial Intelligence' in the UK. Numerous authors and organisations have had a go at defining it – and all are along the lines of systems or machines which emulate human-like intelligence. Although not codified in law, the UK Government's AI white paper of 29 March 2023 defines AI systems by reference to two characteristics: 'adaptivity' and 'autonomy'.

There are also several types of AI, but the one that's been dominating recent news is 'generative' AI. ChatGPT is a popular example – the "G" in GPT stands for generative. This is technology which, fed on huge volumes of data, conducts 'self-supervised' learning using algorithms and various models to refine and generate content. The output of generative AI algorithms is often referred to as synthetic data or synthetic media.

The ability of models like ChatGPT to use data in this way has led to the recent explosion of interest in AI – and both its possibilities, and its legal and other risks.

Why should I care?

Whilst the debate continues as to whether AI is a positive or negative force, there seems to be a consensus that it's the (next?) big thing. And that it needs to be regulated.

Both AI itself, and how it is regulated, have the potential to vastly impact individuals and businesses. AI could create significant business efficiencies and technological advances; any company not using AI could find itself left behind. But it comes with risks, and brings with it the possibility of reputational harm, and of being sued, fined, or even imprisoned if the AI is used for criminal purposes. Key legal risks for businesses using AI include the AI:

  • breaching data protection laws by using personal data inappropriately (with the potential for £17.5m+ GDPR fines)
  • breaching confidentiality laws by misusing confidential information
  • breaching competition laws by facilitating algorithmic price fixing or collusion, with or without the knowledge of the user (with the potential for fines of up to 10% of worldwide group turnover)
  • breaching copyright and other IP laws by copying and using unlicensed content, which may include using copyrighted materials in the algorithm's dataset (several court cases are currently grappling with this very issue)
  • breaching cyber communications and computer misuse laws by being hacked and used to generate malware
  • causing negligence claims by, for example, developing a product which leads to financial loss or physical damage

AI, particularly generative AI, presents unique challenges in determining who is responsible for such breaches. Regulators are scrambling to account for this, leading us to ask…

How is AI being regulated?

Regulators around the world are now focusing on AI and, unsurprisingly, taking different approaches. These range from the more reactive, such as Italy's temporary ban on ChatGPT over data and privacy concerns, through to the European Union's cautious approach of regulating hard and regulating early, all the way to the UK's relatively laissez-faire approach – at least, for now.

In the UK, the government is taking a "pro-innovation approach" to avoid "heavy-handed" legislation, as demonstrated by the AI white paper (linked above). The Competition and Markets Authority's (CMA) initial review suggests the regulator wants to move quickly to develop guidance and regulation. Whilst the CMA wants AI to be accessible to businesses, it will also focus on supporting fair competition, promoting innovation and protecting consumers, no doubt learning from the mistakes made by other regulators in other, related, technological markets.

We are closely following developments in this space, and will keep you up to date with the What, Why and How of AI. The next article in the series will look to the future of regulation: examining 'human rights'-type principles and their application to AI regulation, existing guidance around AI development and use, reporting and risk assessments, data labelling, and the future of regulatory oversight.

For more on this topic, check out our other articles on AI, including AI investigations and claims gather pace in the United States, and The ICO launches an AI and data protection risk toolkit.
