
Reading Montesquieu on campus — a reminder that law must be rooted in context, not code.
What happens when we ask a machine to make law?
That question lingered at the periphery of my academic life for years — quiet but persistent. It surfaced in seminars, resurfaced in courtrooms, and returned, uninvited, in moments of stillness. It asked not merely about the role of AI in governance, but about something far deeper: can a system without conscience ever deliver justice?
The paper, Context Matters: Why AI Fails at Lawmaking, was born from that question — and from the slow, necessary realization that law is not an algorithm, and never has been.
The Moral Imagination of Governance
Artificial intelligence, with its dazzling promise of speed and scale, has become an unlikely legislator. From the automation of policy review to predictive judgments, algorithmic systems are being folded into the spine of statecraft. The appeal is understandable: fewer costs, faster decisions, cleaner workflows.
But what do we lose when the language of justice is rewritten by systems that do not — and cannot — understand its moral weight?
The law is not a dataset. It is a living record of conflict, negotiation, grief, and repair. It holds within it the sediment of social struggle, the hesitation of ethical doubt, and the memory of dissent. Governance, if it is to retain its legitimacy, must be morally interpretable. Yet AI, built on probabilistic reasoning, offers only reflection — not interpretation. It replicates the past; it does not respond to the present.
Law Is Not a Pattern — It Is a Struggle
One cannot study law for long without confronting its essential paradox: it must be stable enough to protect, and fluid enough to adapt. That paradox is precisely where AI fails.
Take the COMPAS algorithm — a risk assessment tool used in the U.S. criminal justice system. Its outputs, praised for precision, were found to disproportionately flag Black defendants as high-risk. The issue wasn’t technical. It was moral. The algorithm had inherited the biases buried in historical data and returned them with statistical confidence. AI does not err as humans do — it calcifies what history has already failed to correct.
This failure is not incidental. It is structural. AI does not discern. It does not interpret. And interpretation is the heart of law.
A Glimpse of Judgment: Garrow’s Law
It was during the late stretch of this inquiry that I returned to Garrow’s Law — the BBC courtroom drama based on the life of the 18th-century barrister William Garrow. I say “returned” because it had first been introduced to me by my supervisor, Professor Suma Athreye, whose keen sense of intellectual storytelling saw in it a mirror to the very questions I was pursuing. Watching Garrow advocate not just within the law but against it — challenging custom, cross-examining power, and reclaiming the voice of the accused — I came to understand something essential: that law is never neutral performance. It is ethical risk, narrated in public.
AI, for all its linguistic fluency, cannot pause to listen, hesitate in doubt, or act in defiance of pattern. It cannot advocate as Garrow did — with moral imagination, strategic improvisation, and the courage to speak in the face of institutional silence. It can emulate legal syntax. But it cannot perform judgment.
Why Context Is the First Principle
In this paper, I draw from thinkers like Aristotle, whose notion of phronesis (practical wisdom) reminds us that equity lies in context-sensitive judgment, not in universal rule. I revisit Dworkin, who saw law as a coherent narrative of moral principles, not as an inert system of commands. I engage with Montesquieu, whose vision of law as a reflection of the “spirit” of a people — not just a structure of control — has long guided my thinking, even on quiet afternoons reading The Spirit of the Laws on campus. And I return to H.L.A. Hart, whose distinction between primary and secondary rules helps frame the problem: AI can replicate the structure of law, but not the legitimacy of it.
It became clear that AI’s problem is not just that it lacks transparency — it lacks telos. It has no purpose beyond pattern. And that is not enough to govern human lives.
Writing the Paper: A Personal Reckoning
This paper emerged from a long arc of inquiry across law, philosophy, and the critical study of technology. It was shaped decisively by my academic journey, and by the rare intellectual generosity of Professor Suma Athreye, whose rigorous mentorship urged me to pursue not the convenient argument, but the necessary one. What began as a critique of computational governance evolved into an inquiry into the moral architecture of law itself.
In every sentence, I sought to balance accessibility with gravity. I wanted the piece to be readable to the non-specialist — but I also wanted it to demand something: a pause, a question, perhaps even a refusal.
The Challenge Ahead
The question, ultimately, is not whether AI should have a role in governance. It already does. The challenge is to limit its domain — to ensure it informs, but never substitutes. The risk is not that machines will replace lawyers or judges, but that we will let them replace our struggle with judgment.
Governance is not about procedural mimicry. It is about ethical reckoning. It is the patient, painful act of deciding what kind of world we wish to live in — and who we are willing to be responsible to.
No machine can do that. And no machine should be asked to.
References
Aristotle (2009). Nicomachean Ethics (Trans. David Ross). Oxford University Press.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica.
Dworkin, R. (1986). Law’s Empire. Harvard University Press.
Hart, H. L. A. (1961). The Concept of Law. Oxford University Press.
Hobbes, T. (1651/1996). Leviathan (Ed. Richard Tuck). Cambridge University Press.
Montesquieu (1989). The Spirit of the Laws (Eds. Anne M. Cohler, Basia C. Miller, and Harold S. Stone). Cambridge University Press.
Ruhil, O. (2025). Context Matters: Why AI Fails at Lawmaking. AI & Society. Springer Nature.
Garrow’s Law (2009–2011). BBC television series, created by Tony Marchant. BBC One.
Read the full article:
Context Matters: Why AI Fails at Lawmaking
AI & Society (Springer Nature), 2025
Author:
Olivia Ruhil
School of Public Policy,
Indian Institute of Technology Delhi