AI Impact Assessments: A practical guide for businesses


As AI becomes broadly integrated into enterprise applications and adopted by businesses, impact assessments before the adoption of AI-driven technology will be essential – whether as a requirement of law or as prudent risk management. In this article, we highlight key principles that apply to AI impact assessments.

Why? The key reasons for conducting an AI impact assessment are risk assessment and design planning.

An AI impact assessment provides a framework for organisations to identify the potential risks and impacts of adopting AI technologies. The objective is not risk elimination, as this may be impractical. Rather, the objective is investigation and consideration of the risks and benefits of the intended technology, measured against the corporate and business objectives in introducing the technology. The outcome will frequently be the adoption of the intended (or similar) technology, but with adaptation, customisation or understanding of the associated risks.

There is no one-size-fits-all policy for AI implementation. An AI impact assessment will inform the specific requirements for implementation and rollout of the technology, and the specific features of the policy and guidelines for its use and operation.

When? An AI impact assessment should be undertaken as soon as a credible business proposal is advanced to introduce AI technologies as a core or material feature of:

  • general business operations;
  • a new business operation;
  • a new office or jurisdiction;
  • an existing deployment, as part of systematic post-adoption reviews.

The first impact assessment must be conducted before the final decision to introduce the technology is taken, and before policies and guidelines are adopted. The purpose of the assessment is to inform the decision and policy. Much of the substance and benefit of the assessment is lost if the assessment is conducted as a fait accompli after the event. Businesses should conduct impact assessment reviews at different stages of using the AI application.

Key elements. An AI impact assessment is an investigative framework to gather information, identify and quantify benefits and risks, and deliver recommendations to minimise risks while maintaining benefits.

  • Investigation: The investigative phase will involve completing a questionnaire or inquiry form that seeks to gather documents and information that will form the factual basis of the assessment. The areas of enquiry will require responses from legal, compliance, technical and social responsibility teams within the business.
  • Risk-benefit analysis: This stage of the process has two elements. First, the risks and benefits of the proposed AI technology are identified and described as objectively as possible. Risks, in particular, should not be limited to purely internal business matters. Then, risks and benefits should be given a weighting and ranking to provide a frame of reference for materiality and seriousness. There is inherently a subjective element to this assessment. However, it provides a benchmark for making recommendations.
  • Recommendations: The report will conclude with recommendations. The recommendations should be practical. For instance, the recommendation may be that issues can be eliminated or mitigated by technical design or customisation. Nonetheless, if an issue has core materiality and poses a serious to severe risk that cannot be mitigated or eliminated, then the recommendation must be to reject adoption of the AI technology.
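The weighting-and-ranking step described above can be illustrated with a minimal sketch. The risk items, the 1–5 likelihood and impact scales, and the scoring thresholds below are all hypothetical examples; a real assessment would define its own criteria and bands.

```python
# Illustrative sketch of weighting and ranking risks in a risk-benefit
# analysis. All names, scales and thresholds are hypothetical.

RISKS = [
    # (description, likelihood 1-5, impact 1-5)
    ("Training data contains personal data", 4, 4),
    ("Model output may be inaccurate", 3, 3),
    ("Vendor lock-in", 2, 2),
]

def score(likelihood: int, impact: int) -> int:
    """Simple materiality score: likelihood multiplied by impact."""
    return likelihood * impact

def rank_risks(risks):
    """Return risks ordered from most to least material."""
    return sorted(risks, key=lambda r: score(r[1], r[2]), reverse=True)

def recommendation(likelihood: int, impact: int) -> str:
    """Map a score to an indicative recommendation band (assumed cut-offs)."""
    s = score(likelihood, impact)
    if s >= 16:
        return "reject unless mitigated"
    if s >= 9:
        return "mitigate before adoption"
    return "accept and monitor"

if __name__ == "__main__":
    for name, likelihood, impact in rank_risks(RISKS):
        print(f"{score(likelihood, impact):>2}  {name}: "
              f"{recommendation(likelihood, impact)}")
```

The point of such a sketch is not the arithmetic itself, but that it forces the assessor to record each risk, its weighting, and the resulting recommendation band in a reviewable form.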

Who? Many businesses are introducing cross-functional AI review teams. In these circumstances, the leader of that team will be the correct executive to lead an AI impact assessment. Even if no AI review team has been established, the executive entrusted with the task of leading the assessment must have both seniority in the management structure and experience in conducting assessments. Experience in conducting privacy impact assessments may provide some grounding in the process involved. However, the subject matter and requirements are different for AI impact assessments.

The key element is that the person conducting the assessment has been granted authority from senior management – ideally, the board of directors – to conduct the impact assessment under a mandate that empowers the assessor to require co-operation across the business operations, and to conduct an assessment and deliver a report without fetter or interference.

Assessment criteria. The key elements of an AI impact assessment are:

  • Purpose. Businesses should identify their specific needs and objectives before adopting AI applications. It is also important to set measurable goals for success of the AI project. Will the proposed technology enhance operational efficiency or reduce business costs? Are the expected benefits sufficiently significant to outweigh potential risks? The relevant AI strategies should also specify the purposes for which AI may be used and provide guidance on how it should be used.
  • Safety and reliability. The impact assessment should assess whether the AI application will perform the designated functions consistently without causing harm to users, organisations and the environment. The impact assessment should review processes to monitor and manage the integrity and quality of the data being used to develop AI applications. Also, there must be an assessment of human oversight over AI decision-making to ensure the use of the AI application achieves desirable outcomes. The impact assessment should consider proposals for ongoing assessment and review of the AI application, and whether they meet requirements that the use of the AI application is regularly monitored and periodically reviewed for safety, security and accomplishment of intended outcomes.
  • Accountability and transparency. The impact assessment should critically review the internal governance structure to oversee the AI application. This assessment will include checking that there is a clear designation of roles and responsibilities for persons who are accountable for the business’ compliance with the relevant AI regulations and requirements. The assessment will review and report on whether and how the business will provide information to customers, regulatory bodies and other third parties about the business’ use of AI. This will include reviewing the approach of the business to reporting on the intended purposes and usage of the AI application, the types of data sets being used, and how the AI system has been developed or applied within the business.
  • Privacy. The assessment will look at how the AI application has collected and used personal data in the training and deployment phases of the AI. In particular, the assessment will review and report on whether appropriate data minimisation techniques have been deployed to eliminate, encrypt, pseudonymise or minimise the use of personal data. The assessment will also consider other requirements under applicable laws and regulatory frameworks governing personal data privacy or data governance generally.
  • Legal compliance. Particular concerns have been expressed by regulatory authorities in a number of jurisdictions in respect of data scraping and similar automated collection of public data sets, which are often conducted without regard for intellectual property or other rights. The assessment will consider the measures that have been taken by the AI provider to ensure that third-party rights have been protected in the training and deployment phases of the AI. Many countries are adopting specific legislation to govern AI development and use, and there are also many industry guidelines and regulations that can apply. The assessment will consider whether the practices proposed by the business will comply with those requirements.
  • Ethics and society. The assessment should also consider how the adoption and use of the AI application will affect a broad range of stakeholders in the business, and the community and society at large. The automation of business processes affects employees, displaces roles and functions in society, and may even have unintended adverse environmental effects. These should all be subject matters for the assessment.
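Pseudonymisation, one of the data minimisation techniques mentioned in the privacy criterion above, can be sketched as keyed hashing of direct identifiers. This is an illustrative example only, not a compliance recipe: the key name and record fields below are hypothetical, and in practice the secret key must be generated and stored securely, separate from the pseudonymised data.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice, generate randomly and store
# separately from the pseudonymised data (e.g. in a key vault).
SECRET_KEY = b"replace-with-a-securely-managed-key"

def pseudonymise(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Replace a direct identifier with a stable token (HMAC-SHA256).

    The same input always yields the same token, so records can still
    be linked, but the original identifier cannot be recovered from
    the token without the key.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example: strip the direct identifier from a record before it is used
# in AI training. Field names are illustrative.
record = {"name": "Jane Doe", "purchase": "widget"}
record["name"] = pseudonymise(record["name"])
```

Because pseudonymised data can still be linked back to an individual by anyone holding the key, it generally remains personal data under most privacy regimes; the assessment should record that limitation rather than treat pseudonymisation as anonymisation.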

What happens next?

An AI impact assessment report is not a report for the metaphorical top shelf. It is an actionable management tool. The delivery of an impact assessment report is not the end of the process. If recommendations are made, then those recommendations must be formally assessed and acted upon by the business. It may mean revisiting, clarifying and revising certain recommendations. However, the recommendations of an AI impact assessment should generally be adopted to the extent practicable. There should be a formal record of the adoption of those recommendations so that functional business units are mandated to follow the approved recommendations.


The future of AI is inextricably linked to our ability, at a granular business and operational level, to promote responsible and ethical use of the technology. Conducting an AI impact assessment is an important first step that businesses should take before they adopt and deploy AI technologies. An AI impact assessment has a complex blend of legal, technical and ethical features. It is an essential tool that provides a framework to gain the insights necessary for businesses to prudently navigate AI adoption.

Padraig Walsh and Stephanie Sy


Disclaimer: This publication is general in nature and is not intended to constitute legal advice. You should seek professional advice before taking any action in relation to the matters dealt with in this publication.