As artificial intelligence (AI) continues to transform industries, organizations deploying high-risk AI systems must now comply with both the EU's Artificial Intelligence Act (AI Act) and the General Data Protection Regulation (GDPR). These regulations, while distinct, share common principles, particularly around transparency, fairness, and accountability. A key area where they intersect is the requirement for organizations that use AI systems (termed "deployers" under the AI Act) to conduct Data Protection Impact Assessments (DPIAs) when those systems process personal data, especially in high-risk scenarios.
Under the AI Act, deployers of high-risk AI systems are obligated to carry out a DPIA where personal data is involved. These systems are often used in sensitive areas like recruitment, employee management, and performance evaluation, cases where automated decision-making can have significant consequences for individuals. The DPIA is a structured process for assessing the risks an AI system may pose to individuals' rights and freedoms; Article 26(9) of the AI Act directs deployers to use the information supplied by the system's provider to meet their DPIA obligation under Article 35 of the GDPR.
In this context, a DPIA evaluates the likelihood and severity of the risks associated with deploying AI, and ensures that deployers take proactive steps to mitigate those risks before putting a high-risk AI system into use. The GDPR's Article 35 outlines specific situations where DPIAs are required, such as large-scale processing of sensitive data, profiling, or systematic monitoring of publicly accessible areas. The AI Act, however, frames the DPIA through the lens of AI-specific risks, and its requirements should be read accordingly.
High-risk AI systems, as classified by the AI Act, often involve automated decision-making with significant implications for individuals, including, but not limited to:

- Recruitment and candidate screening
- Employee management, promotion, and termination decisions
- Performance evaluation and monitoring
Such systems are deemed high-risk because of their potential to impact fundamental rights, including privacy, non-discrimination, and freedom from bias. A DPIA helps deployers evaluate whether the benefits of using AI in these contexts outweigh the potential risks to individuals.
The first step in a DPIA is to describe how the AI system processes data, its purpose, and the potential risks it poses to data subjects. This includes specifying the system's intended use, any potential biases, and the level of human oversight involved. Under the AI Act, deployers must also ensure the input data is representative and appropriate for the intended purpose, highlighting the importance of transparency in the system's design and operation.
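To make this first step concrete, a deployer might capture the system description as a structured record that can be versioned and reviewed alongside the rest of the DPIA. The sketch below is a hypothetical Python schema; neither the AI Act nor the GDPR prescribes a particular format, and the field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AISystemDescription:
    """Illustrative opening section of a DPIA: what the system does and how.

    This schema is an assumption for the example; no specific format
    is mandated by the AI Act or the GDPR.
    """
    system_name: str
    intended_use: str                     # purpose of the processing
    personal_data_categories: list[str]   # what personal data flows in
    data_sources: list[str]               # where the input data comes from
    known_bias_risks: list[str]           # biases identified at design time
    human_oversight: str                  # who can review or override outputs

description = AISystemDescription(
    system_name="CandidateRanker",  # hypothetical recruitment system
    intended_use="Rank job applicants for interview shortlisting",
    personal_data_categories=["CV text", "employment history"],
    data_sources=["Applicant-submitted CVs"],
    known_bias_risks=["Historical hiring data may under-represent some groups"],
    human_oversight="A recruiter reviews and can override every shortlist",
)
```

Recording the description this way makes it easy to compare against later versions when the system or its input data changes.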
Deployers must demonstrate that the use of AI is necessary to achieve their objectives and that there is no less invasive method to reach the same goal. For example, if an AI system is used for automated decision-making, the deployer must justify why the system is needed and whether it provides better outcomes compared to manual processes. This assessment must be balanced against the potential risks to individual privacy and data protection.
A DPIA requires the deployer to systematically assess the risks AI systems pose to individuals' rights and freedoms. This includes evaluating potential harms, such as bias, discrimination, and lack of transparency in automated decisions. For instance, an AI system used in recruitment might inadvertently introduce gender or racial bias. The DPIA helps identify these risks early and proposes measures to mitigate them, such as improving data quality, enhancing transparency, or introducing more robust human oversight.
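One way to surface the recruitment bias described above is to compute a simple fairness metric over a sample of the system's past decisions. The sketch below measures the demographic parity gap, the difference in selection rates between groups; the data, group labels, and review threshold are all invented for illustration, and a real assessment would use several complementary metrics.

```python
from collections import defaultdict

def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Gap between the highest and lowest selection rates across groups.

    Each decision pairs a group label with whether the system recommended
    the candidate. A large gap is a signal to investigate in the DPIA,
    not by itself proof of discrimination.
    """
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, recommended in decisions:
        totals[group] += 1
        if recommended:
            selected[group] += 1
    rates = [selected[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical audit sample: (group label, recommended for interview?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(sample)
if gap > 0.20:  # illustrative review threshold, not a legal standard
    print(f"Selection-rate gap of {gap:.0%}: flag for review in the DPIA")
```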
Deployers must also implement appropriate safeguards to mitigate identified risks. These may include data minimization techniques, pseudonymization, enhanced security protocols, and regular audits of AI system performance. Staff training and clear processes for human review are also essential, especially where automated decisions have significant consequences. Moreover, deployers must ensure that AI systems are regularly monitored and updated to adapt to new risks as technology and its applications evolve.
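As one concrete example of these safeguards, pseudonymization can replace direct identifiers with keyed hashes before records reach the AI system, keeping data linkable for audits without exposing names. This minimal sketch uses Python's standard library; in practice the key would live in a key-management system, and which fields to pseudonymize depends on the deployment.

```python
import hashlib
import hmac

SECRET_KEY = b"example-only"  # placeholder; store real keys in a key-management system

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256).

    The same input always yields the same token, so records stay linkable
    for audits, but the original value cannot be read back without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"employee_email": "jane.doe@example.com", "performance_score": 4.2}
safe_record = {
    "employee_ref": pseudonymize(record["employee_email"]),  # token, not the email
    "performance_score": record["performance_score"],        # non-identifying field kept
}
```

Note that pseudonymized data generally remains personal data under the GDPR, because re-identification is possible with the key; the technique reduces risk rather than lifting data protection obligations.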
While both the AI Act and GDPR emphasize the need for DPIAs, they approach risk differently. Under the GDPR, DPIAs are required when certain thresholds are met, such as processing sensitive data or monitoring individuals on a large scale. The AI Act, however, focuses specifically on the risks posed by AI systems and the deployer's role in managing them. When deployers act as controllers, determining the purposes and means of data processing, they bear full responsibility for performing a DPIA under both the GDPR and the AI Act.
The AI Act goes further by mandating specific risk mitigation measures for high-risk AI systems. These include human oversight mechanisms, cybersecurity safeguards, and explainability standards for how AI systems make decisions. The AI Act also requires providers of AI systems to supply detailed instructions to deployers, ensuring they understand the system's limitations, accuracy, and potential risks.
A DPIA is not a one-time exercise. As AI systems evolve or new data is introduced, the DPIA must be revisited to ensure it reflects current risks and circumstances. Continuous monitoring is essential to ensure that the AI system continues to function as intended and that any emerging risks are promptly addressed. The involvement of a Data Protection Officer (DPO) is critical, as they provide guidance on how to mitigate risks and ensure compliance with data protection laws.
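In practice, "revisiting the DPIA" can be wired into routine monitoring: track a few agreed indicators and flag the assessment for review whenever one drifts past its tolerance. The indicators, baselines, and tolerances below are invented for illustration; what is worth monitoring depends on the system and its context.

```python
from dataclasses import dataclass

@dataclass
class MonitoringCheck:
    name: str
    baseline: float   # value recorded when the DPIA was last approved
    current: float    # latest observed value
    tolerance: float  # maximum acceptable drift before a review is triggered

def dpia_review_needed(checks: list[MonitoringCheck]) -> list[str]:
    """Return the indicators that have drifted beyond their tolerance."""
    return [c.name for c in checks if abs(c.current - c.baseline) > c.tolerance]

checks = [
    MonitoringCheck("human_override_rate", baseline=0.05, current=0.18, tolerance=0.05),
    MonitoringCheck("selection_rate_gap", baseline=0.08, current=0.10, tolerance=0.05),
]
flagged = dpia_review_needed(checks)
if flagged:
    print("Revisit the DPIA - drift detected in: " + ", ".join(flagged))
```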
Additionally, the AI Act emphasizes the importance of transparency and public accountability. While some parts of a DPIA may remain confidential due to trade secrets or other sensitive information, publishing elements of the DPIA can foster public trust by demonstrating that risks have been carefully assessed and addressed.
The interplay between the AI Act and GDPR underscores the need for a comprehensive approach to data protection in deploying AI systems. As high-risk AI systems become more prevalent, DPIAs are crucial tools for ensuring that organizations not only comply with legal requirements but also protect individual rights. By conducting thorough DPIAs, deployers can identify potential risks, implement effective safeguards, and promote transparency and fairness in AI deployment.
Ultimately, a well-executed DPIA not only helps mitigate risks but also builds trust with stakeholders by demonstrating a commitment to ethical AI practices. For organizations using AI, aligning with both the AI Act and GDPR is not just a regulatory requirement—it’s an essential step in responsible AI governance.
Tess Frazier is the Chief Compliance Officer at Class. She’s built her career in education technology and believes a strong compliance, data privacy, and security program benefits everyone.