Artificial Intelligence (AI) and GDPR
Artificial Intelligence (AI) tools are transforming the way we interact in both our personal and professional lives. Whether it is face recognition on our smartphones, customised experiences on social media, or online ID verification for a mortgage application – this kind of efficiency is something we have grown to expect. In a business environment, AI can automate tasks, improve decision-making, support content creation, enhance targeting, and provide valuable insights.
The Dark Side of AI Technology
However, with online technology come risks such as 'deepfakes' – manipulated videos or images used to create fake but convincing content. These can be used to spread disinformation, defame individuals, or influence public opinion. In day-to-day business environments, AI tools can present new risks and challenges in terms of intellectual property, ethics & human rights, copyright, information security, inaccurate or libellous content, and data protection.
Key Risks in Business Environments:
Misinformation about individuals, employees or clients
Bias and discrimination during automated recruitment
Disclosure of confidential & commercially sensitive information
Purpose of this guidance: This guidance note supports you in keeping each other and the organisation safe and secure when using AI tools – especially when it involves sharing information about yourself, your colleagues, young people, or sensitive company information.
Summary Background
As AI is evolving daily, it is difficult to summarise all the types and impossible to list all application names and uses. Today, some of the main types include:
Machine Learning: The ability of a machine to learn from data and improve its performance over time, eventually performing tasks it was not explicitly programmed for.
Example: Facial recognition for ID verification
Natural Language Processing: The ability of a machine to understand and generate natural language, such as speech or text.
Example: ChatGPT and other Large Language Models (LLMs)
Computer Vision: The ability of a machine to perceive and interpret visual information, such as images or videos.
Example: Self-driving cars
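To make the "learns from data" idea concrete, here is a minimal sketch in Python of one of the simplest machine-learning algorithms, a perceptron. The task and data are invented purely for illustration – they are not drawn from any real system:

```python
# Minimal illustration of machine learning: a perceptron "learns from data"
# by nudging its weights whenever it misclassifies a training example.
# The toy dataset below is hypothetical, purely for illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear classifier from labelled examples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict +1 or -1, then correct the weights when wrong.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """Classify a new point using the learned weights."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy data: points above the line y = x are labelled +1, below are -1.
data = [(0.0, 1.0), (1.0, 2.0), (1.0, 0.0), (2.0, 1.0)]
labels = [1, 1, -1, -1]
w, b = train_perceptron(data, labels)
```

After training, the model classifies new points it has never seen – the essence of learning from examples rather than explicit rules.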
AI GDPR Compliant Steps
When signing up to new software or apps on social media, online applications, new tech suppliers and AI applications, you should engage consciously with the process:
Evaluation: Make a conscious evaluation of the security and privacy of any AI tool before using it. This includes reviewing the provider's Privacy Policy and Terms & Conditions – what will they do with the information you provide, where do they store it, and who will they share it with? Many tools (ChatGPT included) may use the information you provide to train their models by default unless you opt out.
Compliance: It is a regulatory requirement that you assess and log ALL technology that the organisation engages with, whether that is a web developer, HR platform or AI application.
Reputable: As part of your evaluation, use only reputable AI tools from companies with an established track record. Avoid tools developed by individuals or companies without one – unless you have done your homework on their ethics and practices.
Protection: Never upload information that is private and personal, business confidential, proprietary information belonging to the organisation or protected by regulation. This includes information about colleagues, clients or third-party providers. Never share access or login passwords to your AI accounts and keep the software up to date.
Accuracy: Fact check, and fact check again! When using an AI tool to write content, carry out research, or design promotional material or videos, all output should be checked for reliability and accuracy. While it can appear polished and professional, content can often be out of date, biased, or simply inaccurate. This can lead to embarrassment, loss of credibility, defamation of character, or negative media coverage.
Transparency: For the sake of accuracy and accountability, information created by AI should be referenced as AI-generated.
View or download our FREE 10 Steps for GDPR Compliance using AI in Business here.
Your Responsibility for Security, Privacy and Ethics
Following these steps will help to ensure that the use of AI tools remains safe and secure for your organisation, maximising the benefits of AI tools while minimising the potential risks associated with their use.
The AI Act
The EU Artificial Intelligence (AI) Act was passed into law in 2024. Its focus is to protect the fundamental rights of individuals while promoting responsible innovation. The Act takes a risk-based approach to the use of AI, categorising systems as:
Banned (unacceptable-risk) systems
High-risk
Limited-risk
Minimal- or no-risk
Requirements and fines get progressively stricter as the perceived risk level rises.
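As a rough sketch of how those tiers scale, the mapping below pairs each risk level with a one-line summary of its obligations. The summaries are simplified illustrations for orientation only, not legal text:

```python
# Sketch of the AI Act's risk-based tiers mapped to example obligations.
# The obligation summaries are simplified illustrations, not legal text.
RISK_TIERS = {
    "banned": "Prohibited outright (e.g. social scoring systems).",
    "high-risk": "Conformity assessment, logging and human oversight required.",
    "limited-risk": "Transparency duties, e.g. disclosing AI-generated content.",
    "minimal-risk": "No specific obligations; voluntary codes of conduct.",
}

def obligations_for(tier: str) -> str:
    """Look up the simplified obligation summary for a risk tier."""
    return RISK_TIERS.get(tier.lower(), "Unknown tier")
```

The point of the tiering is that compliance effort should be proportionate: most everyday business tools fall into the lower tiers, while systems affecting rights or safety attract the heaviest duties.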
Types of Banned Systems
1. Manipulative AI Systems
Example: An AI system that manipulates elderly users into making unnecessary purchases by exploiting their cognitive vulnerabilities.
2. Social Scoring Systems
Example: A system that gives citizens scores based on their social media activity, shopping habits, or associations, affecting their access to services.
3. Real-Time Remote Biometric Identification in Public Spaces
Example: Live facial recognition cameras scanning crowds at shopping centres or protests without specific justification.
4. Untargeted Scraping of Facial Images
Example: Scraping millions of photos from social media to build a facial recognition database without consent.
5. Emotion Recognition in Certain Contexts
Example: An AI system monitoring employees' facial expressions to determine their emotional state during work.
6. Biometric Categorisation Systems
Example: An AI system analysing facial features to determine someone's ethnicity or political leanings.
While EU guidance is still taking shape, we know AI presents new risks in terms of intellectual property, ethics & human rights, copyright, information security, inaccurate or libellous content, and data protection, which could include:
Liability for misinformation
Bias and discrimination
Disclosure of sensitive information
Recommendations for Good AI/GDPR Governance
Ask the question – who in the organisation is currently using AI, and for what purposes?
Identify the potential opportunities and efficiencies for the use of AI
Produce a Guidance Document or Policy for the safe use of AI
Carry out specific training / information sessions
Carry out a DPIA (Data Protection Impact Assessment) prior to integrating / introducing a new system
Integrate AI applications into the Record of Processing (ROPA) & Inventory Management
Assess the ongoing use, as machine learning grows
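To show what logging an AI tool in your Record of Processing and inventory might look like in practice, here is a minimal sketch. The field names and example entry are hypothetical illustrations, not a prescribed ROPA schema:

```python
from dataclasses import dataclass, field, asdict

# Illustrative record for an AI-tool inventory / ROPA entry.
# Field names are hypothetical examples, not a prescribed GDPR schema.
@dataclass
class AIToolRecord:
    name: str
    supplier: str
    purpose: str
    data_categories: list         # e.g. staff data, client data
    storage_location: str         # where the supplier stores the data
    dpia_completed: bool = False  # has a DPIA been carried out?
    review_notes: list = field(default_factory=list)

# Example inventory with one hypothetical entry.
inventory = [
    AIToolRecord(
        name="ChatGPT",
        supplier="OpenAI",
        purpose="Drafting marketing copy",
        data_categories=["no personal data permitted"],
        storage_location="USA",
        dpia_completed=True,
    ),
]

# Flag any tool that has not yet been through a DPIA.
needs_review = [t.name for t in inventory if not t.dpia_completed]
```

Keeping such records in a structured form makes it straightforward to answer the governance questions above – who is using what, for which purpose, and whether a DPIA has been completed before the tool was introduced.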
We advise our clients on all aspects of AI, privacy and data protection.