Board Expertise: Immediate AI Governance Risks

The rapid evolution and adoption of artificial intelligence (AI) applications is a significant and continually emerging issue for companies, and one to which Boards are only beginning to respond from a governance perspective. In this article, we address the scope and status of European regulation in this area as well as the significant (and accelerating) immediate areas of risk that board Directors should consider and address.

International regulators have begun to respond to the extent of AI adoption throughout the market. The extent and speed of response have varied, with the EU taking a more proactive approach towards explicit frameworks.

EU AI Act Overview

The European Union has moved forward with regulation to govern the use of AI within its member states. Known as the AI Act, this legislation creates a uniform legal framework for the development, deployment, and application of AI systems. The core pillars of the Act are to:

    • ensure safety, protect fundamental rights, and promote innovation;

    • have wide applicability to AI providers and users in public and private sectors; and

    • include AI systems in third countries if output is used in the EU.

The Act categorizes AI systems based on risk levels (unacceptable, high, limited, minimal) and sets obligations for providers and users accordingly. It also introduces rules for providers of general-purpose AI (GPAI) models.

National regulators are expected to focus especially on the use of high-risk systems, such as AI used for credit scoring or investment decisions. While the benefits of AI-based applications are generally acknowledged, regulators have flagged significant downside risks: lack of operational transparency, weak governance accountability, market manipulation, and threats to financial stability and information security. It is expected that Boards and executive management will be relied upon to appropriately manage the adoption of such systems from a risk, reliance and transparency perspective. This is especially relevant as the current rate and extent of adoption is expected to increase further throughout 2024 and beyond.

Immediate Governance Risks to Address

Given the accelerating pace of AI development and the growing implementation of both bespoke in-house and generally available systems, certain new (as well as elevated existing) risk factors require enhanced Board vigilance and response.

Even for firms with a low rate of AI adoption, a clear understanding of the changes in AI-related client and third-party vendor reliance risks is required.

  1. Financial Crime:
    New applications of AI systems have the potential to be used for diverse forms of market manipulation, creating new forms of financial crime that boards need to acknowledge and guard against. Additionally, new model capabilities may further challenge the required depth and sophistication of AML / KYC capabilities, which must remain compliant and robust against new avenues of potential criminal misrepresentation.

  2. Board Regulatory Compliance Challenges:
    As regulations like the EU AI Act come into force, Boards face the challenge of ensuring compliance with complex and evolving regulatory requirements specific to AI. The EU AI Act is the first such step in this direction, with subsequent guidance and regulation expected to follow from other regulators and international bodies (e.g. the SEC, the FCA and the UN).

  3. Information Security and Privacy:
    AI systems often require extremely large supporting data sets, increasing the risk of data breaches and privacy violations. Boards need to ensure that robust data governance and cybersecurity measures are in place and keep pace with current change. Data security risks are significantly further increased by any use of applications that are not hosted and developed in-house.

  4. AI Governance Expertise Gaps:
    Given the emergent nature and pace of AI deployment, there is a material gap in relevant governance experience across many regulated and operating Boards. The mismatch between the pace of implementation of AI systems and the capability to effectively govern and control their use requires immediate attention. Additionally, a lack of decision-making transparency and accountability, or unclear and ineffective board responses to these changes, may expose Directors to new forms of legal liability and fall short of the standards of governance and control expected by key stakeholders such as investors, debt providers and regulators. These issues are further exacerbated by the expectation that the rate of AI implementation will accelerate (potentially significantly) in the months and years ahead.

  5. Rapid Technological Change:
    The fast pace of AI development may extend beyond the ability of many boards to effectively understand and govern its use, leading to potential mismanagement or operational over-reliance. In the context of systems applied to risk assessment and / or investment selection and management, boards should rigorously identify the use of such systems both in-house and by third-party service providers, and conduct due diligence on their application and associated governance parameters. The 'black box' nature of many such models heightens the need for persistent and constructive challenge to ensure transparency around all relevant decision making.

  6. AI Capability Risk:
    Both current applications and emerging use cases of AI models may fail to produce correct responses or may produce unexpected results, especially in unusual market conditions. This could lead to significant financial losses or operational disruptions. Prevalent leading generative AI, such as Large Language Models, have embedded tendencies to produce incorrect output (known as 'hallucinations'), which further emphasises the need for caution, oversight and control in their application. Moreover, with wider market adoption, AI applications trained on similar (or in some cases identical) data sets may lead to actions that are unknowingly highly correlated and potentially price or risk disruptive.

  7. Ethical and Bias Concerns:
    The use of AI raises ethical questions that boards must address for the benefit of all stakeholder groups. These include discrimination in decision-making, lack of transparency ('black box' output), fair representation of certain groups, handling of sensitive data against ethical and regulatory standards, workforce reskilling and displacement, environmental and ESG concerns relating to the energy consumption of such systems, and intellectual property concerns around AI-generated output and training data sets. Concerns in this space have grown significantly over the last year, even though wide adoption is in many cases still in its infancy, so a focus on both current issues and their near-term evolution is important. Corporate bias has been a particular area of focus in recent years; caution is required in applying AI-based decision making, as the large historical data sets on which these systems are trained may amplify existing biases (algorithmic bias).

As a result of these issues, boards need to immediately develop a deep understanding of AI applications (both in-house and external), implement transparent governance frameworks and ensure ongoing risk assessment of their application and output.

Implementation Timeline of the EU AI Act

While still subject to change as the adoption of AI-based systems evolves, the current regulatory timeline is as follows:


2024

The EU AI Act entered into force on August 1, 2024 and is now in its implementation phase.

2025

From February 2025, the Act's prohibitions on AI practices deemed to pose an unacceptable risk apply, with obligations for providers of general-purpose AI models following from August 2025.

2026 – 2027

From August 2026, most of the Act's remaining obligations apply, including its transparency and information disclosure requirements and the rules for high-risk AI systems, which impose stricter standards for transparency, safety and explicit human oversight. Requirements for high-risk systems embedded in regulated products follow in 2027.
