From Information Fiduciaries to AI: Minding the Gap of Trust
This paper revisits fiduciary relationships in the context of digital platforms and artificial intelligence (AI), evaluating whether fiduciary duties retain relevance within an AI-driven socio-technical environment. It outlines the core features of fiduciary law and examines how interpersonal and institutional trust are reshaped by digital platforms, drawing on the EU Digital Services Act (DSA) (2022) and the UK Online Safety Act (OSA) (2023). Building on Balkin’s (2016) theory of information fiduciaries, the paper highlights and analyses a broader transformation of trust across platforms and AI systems. It argues that AI systems may widen the trust gap as users increasingly over-rely on AI and move further away from traditional fiduciary relationships. While the DSA and OSA seek to enhance user trust through strengthened transparency and accountability duties, a distinct regulatory shift emerges in the EU AI Act (2024). By emphasising AI trustworthiness, the Act risks decoupling trust from its moral foundations, potentially fostering misplaced trust (O’Neill 2018), distributed trust (Botsman 2017) or lazy trust (Myskja & Steinsbekk 2020). In conclusion, the paper critiques the EU AI Act for deprioritising human trust, advocating for enhanced individual rights and citizen participation in AI governance to mitigate trust gaps and the declining role of fiduciary relationships.
| Item Type | Article |
|---|---|
| Identification Number | 10.1080/13600869.2025.2602111 |
| Additional information | © 2025 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/). |
| Keywords | artificial intelligence (AI), information fiduciaries, trust, social sciences (all) |
| Date Deposited | 16 Feb 2026 10:32 |
| Last Modified | 21 Feb 2026 00:03 |
