
AI, Boardrooms and the Law: Delegation, Ownership and Human Judgment

The term "corporate governance" generally refers to the legal and factual framework for managing and supervising a company. Its goal is to establish decision-making structures that ensure responsible and effective corporate leadership. The rapid integration of Artificial Intelligence systems is transforming the foundations of corporate governance in multiple ways. As boards, general counsel and compliance leaders consider the impact of AI on established governance models, three central aspects now come to the fore: 

Upholding human accountability: Despite AI’s rise, the law across most jurisdictions still requires that directors and officers be natural persons, so AI can support – but not replace – boards and management.

Adapting diligence and strategy: Integrating AI effectively into strategic decision-making processes requires the development of new standards of care and rigorous assessment, including the application of principles like the Business Judgment Rule to the nuances and complexities of AI tools.

Proactive regulatory compliance: The expanding regulatory landscape means companies must proactively map out how and where AI is being used, implement robust governance and risk management processes, and continually update strategy and compliance documentation to ensure both innovation and legal conformity. 

The following sections explore how these new challenges and opportunities are re-shaping the intersection of AI and corporate governance for leaders and legal advisors alike.

AI and Corporate Management 

Can AI Take Over? – The Limits of Delegation

One of the most intensely debated questions is how much autonomy can be granted to AI systems without violating the legal limits on delegation that apply in most jurisdictions.

In most European jurisdictions it is not legally possible to transfer management authority to AI systems. The ultimate decision-making authority and core management functions must remain with human governing bodies. Consequently, under current law, AI systems cannot formally serve as board members, and it is unlikely that this will change soon. However, AI may be used as a board observer without voting rights, as a decision-support tool, or as an analytical instrument for data evaluation.

AI as a Management Tool – The Legal Framework

In jurisdictions where management authority must remain with humans, the discussion focuses on the extent to which AI can effectively exercise a management role under the supervision of a formally appointed human authority. This raises the question of the permissible limits of decision support – taking into account the general duty of care (Sorgfaltspflicht) and, for commercial decisions, the Business Judgment Rule – or, conversely, of the minimum level of human decision-making that the boundaries of delegation require.

AI as an Expert – in German-language scholarship in particular, it is argued that boards should treat the results of AI systems similarly to advice provided by human experts, while appropriately accounting for the structural differences between AI systems and human experts. The doctrinal basis for this approach is the so-called ISION decision of the German Federal Court of Justice (BGH), which dealt with directors' duty of care when involving human experts. In that ruling the BGH developed the "ISION" principles – careful selection, clear assignment, plausibility review, and independent assessment – which now serve as a commonly accepted framework for how boards should engage with both human and AI inputs. When applied to AI, however, these principles require further interpretation, particularly given the black-box nature of many AI systems.

Beyond mere permissibility, current discussion increasingly points to cases where boards are obliged to at least consider the use of AI in management decision-making. In other words, as AI becomes standard in certain sectors, mere non-use may constitute a breach of corporate duties if it results in decisions being made on the basis of inferior information or analysis. 

These principles and the general limits of delegation do not imply that an AI system must itself be capable of bearing liability, or be fully explainable, in order to be used for decision support. The important point is that human board members retain ultimate responsibility and fulfil their duties of care across the AI lifecycle, including "data management", "model learning", "model verification" and "model deployment". They must also be able to critically assess, interpret and, if necessary, reject AI-generated results – which requires a certain level of AI literacy.
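These lifecycle duties can be made concrete as an internal oversight record. The sketch below is purely illustrative: the four stage names come from the text above, while the record structure, field names and sign-off logic are assumptions for the sake of the example, not a legal standard.

```python
from dataclasses import dataclass

# Lifecycle stages as named in the text; the rest of this
# structure is a hypothetical illustration, not a legal standard.
LIFECYCLE_STAGES = (
    "data management",
    "model learning",
    "model verification",
    "model deployment",
)

@dataclass
class StageSignOff:
    stage: str
    responsible_member: str       # human accountability is retained
    plausibility_reviewed: bool   # board checked outputs for plausibility
    result_accepted: bool         # the board may also reject AI results

def open_oversight_gaps(sign_offs):
    """Return lifecycle stages lacking a completed plausibility review."""
    reviewed = {s.stage for s in sign_offs if s.plausibility_reviewed}
    return [stage for stage in LIFECYCLE_STAGES if stage not in reviewed]
```

Such a record would merely document that each stage has a named human owner and a completed review; it does not, of course, substitute for the substantive judgment the duty of care requires.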

AI and External Compliance Requirements

The integration of AI into corporate management does not, however, take place in a vacuum. The external legal environment is evolving just as rapidly, and organizations must ensure their internal AI strategies align with emerging regulatory frameworks.

A central aspect of lawful and responsible AI deployment in corporations is the duty of legality (Legalitätspflicht), which obliges boards and managers to ensure that all actions and decisions – including those involving AI – comply with applicable laws and regulations. This duty anchors corporate activity in the legal order and sets the outer boundary for both board discretion ("Business Judgment Rule") and new technological possibilities. 

Specifically, the duty of legality means that the use of AI is only permissible to the extent that it does not violate binding legal provisions – regardless of potential efficiency or innovation gains. 

The requirement extends beyond general corporate law: the evolving regulatory landscape imposes significant additional compliance obligations. Examples include the EU AI Act, which introduces a tiered (risk-based) approach for AI applications and sets explicit requirements for transparency, human oversight, and documentation – especially for high-risk AI systems. Data protection requirements (GDPR), cybersecurity laws (such as the NIS-2 Directive), digital services and markets regulations (DSA, DMA, Data Act), as well as sector-specific rules (e.g., MiFID II, PSD2 in finance) and sustainability regulations (CSRD II, CSDDD) also directly affect the design and operation of corporate AI systems. 

As a result, companies face a complex, multi-layered compliance environment: First, they must identify where and how AI is used within the organization – including both centrally managed and "shadow IT" solutions. Second, a systematic assessment of the legal and operational risks associated with each AI application is required, taking into account risk classification, data processing implications, and ethical concerns such as bias or explainability. Third, organizations need to implement comprehensive governance, protocols, and documentation – including clear assignment of responsibilities, regular review and updating of risk profiles, and evidence of compliance for supervisory bodies and regulators.
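The three steps above – identify, assess, document – could be supported by a simple internal AI-use inventory. The following sketch is a hypothetical illustration: the class names are invented, and the simplified risk tiers are only loosely modelled on the EU AI Act's risk-based approach; real risk classification requires case-by-case legal analysis.

```python
from dataclasses import dataclass, field
from enum import Enum

# Simplified tiers loosely inspired by the EU AI Act's risk-based
# approach; actual classification requires legal analysis.
class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AIUseCase:
    name: str
    owner: str                     # responsible function or role
    shadow_it: bool                # discovered outside central IT?
    risk_tier: RiskTier
    processes_personal_data: bool  # triggers a GDPR assessment
    mitigations: list = field(default_factory=list)

@dataclass
class AIInventory:
    use_cases: list = field(default_factory=list)

    def register(self, uc: AIUseCase):  # Step 1: identify AI use
        self.use_cases.append(uc)

    def needs_review(self):             # Step 2: flag risky applications
        return [uc for uc in self.use_cases
                if uc.risk_tier in (RiskTier.HIGH, RiskTier.PROHIBITED)
                or uc.shadow_it]

    def compliance_report(self):        # Step 3: document for regulators
        return {uc.name: {"owner": uc.owner,
                          "tier": uc.risk_tier.value,
                          "mitigations": uc.mitigations}
                for uc in self.use_cases}
```

Even a minimal register of this kind makes responsibilities explicit and gives supervisory bodies something auditable, which is the point of the third step.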

In summary, the duty of legality sets the foundational guardrails for AI use in management, but the legal environment is moving rapidly beyond these core principles, with new regulatory regimes now defining much of what counts as "responsible" or even permissible AI deployment in corporate settings.

Conclusion

The discussion shows that AI is fundamentally changing the way corporate governance works. This affects core questions of corporate management and drives transformation across the full spectrum of internal decision-making and technological compliance obligations. As AI continues to develop, its role in corporate decision-making and governance is certain to grow. Rather than replacing human judgment, AI is poised to become an essential element of forward-thinking, responsible management – enabling companies to adapt, compete and innovate successfully in a rapidly changing environment.