
Fiduciary Mandates
Legal duties of loyalty, care, and confidentiality imposed on AI developers and deployers, requiring them to act in users' and society's interests rather than solely maximizing profit.
What it is:
Fiduciary mandates impose legally enforceable duties of loyalty, care, and good faith on one party that holds power over another's interests. In traditional contexts such as medicine, law, and financial advising, fiduciary duties exist because the relationship is inherently asymmetric: the professional possesses expertise and decision-making authority that the client cannot fully monitor or evaluate, creating a vulnerability that contract law alone cannot address. Extending fiduciary principles to AI would mean that developers and deployers owe legally binding obligations to the users and communities affected by their systems, requiring them to act in those parties' genuine interests rather than optimizing solely for the firm's commercial objectives. Unlike a Public Benefit Corporation, a voluntary corporate structure that broadens board-level governance, fiduciary mandates are externally imposed legal obligations, enforceable by the affected users and communities themselves.
Applied to AI, fiduciary mandates would convert voluntary ethical commitments (responsible AI principles, safety pledges, mission statements) into obligations with legal consequences for breach. An AI developer subject to a duty of loyalty could not knowingly design systems to maximize engagement through addictive patterns, deploy automation that displaces workers faster than affected communities can absorb the disruption, or use personal data in ways that benefit the firm at the user's expense. A duty of care would require developers to conduct adequate testing, monitor for downstream harms after deployment, and maintain the technical competence to understand the systems they release. This is a fundamentally different regulatory approach from product liability, which assigns blame after harm occurs, or licensing, which sets minimum standards for market entry: fiduciary duty imposes an ongoing, affirmative obligation to prioritize the interests of those affected, not merely to avoid causing measurable damage.
The challenge:
The challenge is translating a concept designed for one-to-one professional relationships to firms serving millions of users with diverse and sometimes conflicting interests. A physician's fiduciary duty runs to a specific patient; an AI developer's duty would run to all users simultaneously, and what serves one user's interests may harm another's. What "acting in users' interests" means for a general-purpose AI system is inherently ambiguous, and ambiguous legal obligations create litigation risk that could chill development without producing clear public benefit. Fiduciary duties are also conservative by nature: they require the fiduciary to prioritize the beneficiary's existing interests, which could discourage AI deployments that displace workers in the short term but ultimately create more and better jobs as industries reorganize around new capabilities.
Real-world precedents:
Fiduciary duties have governed professional relationships for centuries: physicians' Hippocratic obligations, attorneys' duties to clients, and investment advisers' obligations under the Investment Advisers Act of 1940.
In 2019, the SEC adopted Regulation Best Interest, which requires broker-dealers to act in retail customers' best interest when making investment recommendations — a standard that draws on fiduciary principles without imposing a full fiduciary duty.