AI and data protection – The first steps in regulation
Artificial Intelligence (AI) is rapidly transforming many aspects of business and society, prompting regulatory bodies to establish guidelines and frameworks to ensure its ethical and responsible use.
Two significant developments in AI regulation are the European Union's AI Act and the guidance and resources published by the UK Information Commissioner's Office (ICO), which represent the initial steps towards the UK's own regulatory framework.
ICO Guidance on AI and Data Protection
The ICO has been actively involved in providing guidance on the use of AI, with a focus on data protection and privacy. The ICO's guidance addresses how to apply UK GDPR data processing principles to AI systems, emphasising fairness, transparency, and accountability in AI decision-making processes.
There are several concepts and terms which the ICO is focusing on at this stage:
· Dark Patterns: This refers to deceptive design tactics used in online environments to subtly manipulate users' decisions, often leading to negative consequences such as compromised privacy or consumer exploitation. These patterns are not new but have gained renewed attention from the ICO due to their potential impact on user autonomy and decision-making. The ICO's focus on dark patterns highlights the need for organisations to design AI systems that respect user autonomy and adhere to the UK GDPR data processing principles.
· AI-as-a-Service: This refers to the delivery of AI capabilities through cloud-based platforms, allowing organisations to access and implement AI technologies without developing in-house expertise. The ICO's guidance highlights the importance of ensuring data protection compliance when using AI-as-a-Service. This includes adhering to the data processing principles of fairness, transparency, and accountability, as outlined in the UK GDPR.
· Recommender Systems: These are AI-driven tools that suggest products, services, or content to users based on their preferences and behaviours. The ICO's guidance on recommender systems focuses on ensuring that these systems operate transparently and fairly. Organisations are advised to clearly explain how recommender systems work, including the logic behind recommendations and the potential impact on users. The guidance also stresses the importance of addressing biases and ensuring that recommender systems do not result in discrimination or unfair treatment of individuals.
· Data Protection by Design and Default: Organisations should embed data privacy and information security into the design and development of AI systems from the outset. This includes ensuring that AI systems are transparent, fair, accountable, and that individuals have control over the processing of their personal data.
The ICO's guidance also highlights the importance of conducting Data Protection Impact Assessments (DPIAs) for AI systems that pose a high risk to individuals' rights and freedoms. DPIAs help organisations identify and mitigate potential risks to data protection before deploying AI systems.
The EU AI Act: Risk-Based Regulation
The EU AI Act adopts a risk-based approach to AI systems developed or used within the EU, categorising them according to their potential impact on fundamental rights.
The Act prohibits certain AI practices deemed unacceptable, including social scoring by public authorities, such as that practised in China, and the use of real-time remote biometric identification systems in public spaces for law enforcement purposes.
The four-tier risk system within the EU AI Act is:
- Unacceptable Risk: AI systems that pose a clear threat to safety, livelihoods, or rights, such as social scoring by governments, are prohibited.
- High Risk: AI systems used in critical areas like healthcare, law enforcement, and transportation must comply with strict regulations, including risk assessments and data quality checks.
- Limited Risk: AI systems with limited risk are subject to transparency obligations, such as informing users when they are interacting with AI.
- Minimal Risk: Systems with minimal risk, like AI-enabled video games, are largely exempt from additional regulations.
A European AI Office will be established to oversee the implementation and enforcement of the EU AI Act. The Act also sets out a complex governance structure with multiple entities, including national authorities and market surveillance bodies, responsible for enforcement and oversight.
The EU AI Act enters into force across all 27 EU Member States on 1 August 2024, and the majority of its provisions will apply from 2 August 2026. The EU is also promoting the AI Pact, a voluntary initiative that encourages AI developers to comply with the Act's obligations ahead of its full implementation.
The work of the ICO and the EU AI Act represent significant steps towards establishing a structured and ethical approach to AI development and deployment, balancing innovation with the protection of fundamental data privacy rights and safety.
CSRB will be keeping up to date as the EU AI Act is implemented and the ICO's guidance evolves. By stripping the jargon away, we help you navigate the minefield of terminology that surrounds data protection and AI. Please get in touch with us or call 0117 325 0830 to learn more about how our certified data protection practitioners can support your organisation.