Sentience and the Market: Rethinking Welfare in the Age of Moral Expansion
- Agneya Dhingra
- 23rd November 2024

In recent years, a quietly radical shift has been taking place at the intersection of philosophy and economics: the expanding recognition of sentience — not just as a biological trait, but as a morally significant one. While economics has long operated with the assumption that utility and preference satisfaction are tied exclusively to human agents, this premise is beginning to fray.
From octopuses and pigs to potential future AI systems, we are increasingly faced with entities that, while outside the traditional scope of economic modeling, may be capable of suffering. What happens when we acknowledge this? What does it mean for how we allocate resources, value outcomes, and define welfare?
I. Who Deserves Moral Consideration?
The classical utilitarian tradition — from Bentham through Singer — rests on a provocative question: “Can they suffer?” If the answer is yes, then the being deserves moral weight. However, contemporary economic policy and modeling rarely reflect this view. Instead, they focus almost entirely on human wants, often mediated through markets, prices, and revealed preferences.
Yet our moral intuitions increasingly diverge from this. Recent legal reforms, such as the UK’s inclusion of cephalopods and crustaceans under sentience protection laws, suggest a growing public and political consensus: suffering matters, regardless of species.
Philosopher Jonathan Birch has championed a precautionary approach to this question, arguing that if there is credible scientific evidence suggesting that a being may be sentient, we ought to act as if it is — particularly when the stakes involve pain, distress, or exploitation. This principle has profound implications. It doesn’t require absolute certainty; it simply urges humility in the face of moral risk.
II. The Economic Implications: A New Kind of Welfare Accounting
This moral expansion forces a rethink of traditional welfare economics. If non-humans count, then models of cost–benefit analysis, growth, and utility maximization need revision. One can no longer justify industrial animal agriculture solely on the grounds of low-cost protein provision if that same system inflicts large-scale suffering on billions of animals.
Standard economic models prize efficiency, but what if that efficiency comes at the expense of enormous hidden harm? A utilitarian model that incorporates sentient beings would need to calculate welfare across species, possibly weighting pain and pleasure not by GDP contribution or market demand, but by intensity and prevalence of suffering.
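To make the idea concrete, the weighting scheme described above can be sketched in a few lines of code. Everything here is an illustrative assumption: the population counts, the 0–1 intensity scale, and the sentience weights (a credence-style discount for beings whose sentience is uncertain, in the spirit of Birch's precautionary approach) are hypothetical placeholders, not empirical estimates.

```python
# Toy sketch of cross-species welfare accounting: aggregate suffering
# by intensity and prevalence, discounted by credence in sentience,
# rather than by market demand. All figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Population:
    name: str
    count: int               # prevalence: number of affected individuals
    intensity: float         # average suffering intensity, on a 0-1 scale
    sentience_weight: float  # credence that members of this group are sentient

def aggregate_suffering(populations: list[Population]) -> float:
    """Sum intensity x prevalence, discounted by sentience credence."""
    return sum(p.count * p.intensity * p.sentience_weight for p in populations)

populations = [
    Population("broiler chickens", 9_000_000, 0.7, 0.8),
    Population("farmed pigs", 120_000, 0.6, 0.9),
    Population("farmed octopuses", 50_000, 0.5, 0.6),
]

print(f"aggregate suffering index: {aggregate_suffering(populations):,.0f}")
```

The point of the sketch is structural, not numerical: a welfare metric of this shape responds to how many beings suffer and how badly, and it moves not at all when GDP contribution or market demand changes.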
There are precedents for this kind of moral accounting. Environmental economics already attempts to price in “externalities” like pollution or climate damage. Extending that logic, we might consider animal suffering or even the potential distress of sentient AI systems as moral externalities — currently unpriced, but morally weighty nonetheless.
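Extending the externality logic above, a cost–benefit comparison could attach a shadow price to suffering per unit of output, just as a carbon tax attaches one to emissions. The per-unit "suffering cost" below is a hypothetical shadow price chosen for illustration, not a real estimate.

```python
# Minimal sketch of treating animal suffering as a priced externality,
# analogous to Pigouvian carbon pricing. The suffering cost per unit
# is a hypothetical shadow price, not an empirical figure.

def social_cost(market_price: float, units: int,
                suffering_cost_per_unit: float) -> float:
    """Private cost plus the moral externality."""
    return units * (market_price + suffering_cost_per_unit)

# Cheap protein looks efficient on market price alone...
private = social_cost(market_price=2.00, units=1_000, suffering_cost_per_unit=0.0)
# ...but less so once the currently unpriced harm is added back in.
full = social_cost(market_price=2.00, units=1_000, suffering_cost_per_unit=1.50)
print(private, full)  # prints: 2000.0 3500.0
```

As with pollution pricing, the hard problem is empirical rather than arithmetical: estimating the shadow price itself, which here would require defensible measures of suffering intensity across species.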
III. Sentientism vs. Consumerism: A Collision Course
There’s an inherent tension here. Our current economic systems are optimized for consumer choice, not compassionate design. They reward firms for lowering costs, even when that means externalizing harm. Including sentient beings in welfare calculations would likely mean placing limits on consumer behavior — curbing factory farming, restricting exploitative technologies, perhaps even imposing ethical guidelines on AI development.
Critics may cry inefficiency or nanny-statism. But these criticisms assume that maximizing consumer utility is the highest social good. Philosophy challenges this assumption. A just society may require us to curtail some freedoms — not arbitrarily, but to avoid causing unjustifiable harm to others, even if those others lack a voice.
This is not just a theoretical concern. As sentience science progresses, AI systems may soon fall into morally ambiguous territory. If future AI can feel — or even convincingly simulate feeling — we will face the same ethical dilemma all over again: will we care, or will we rationalize their suffering away in the name of progress?
IV. Rethinking Value in an Expanding Moral Circle
The economist Fred Hirsch once warned that modern economies increasingly revolve around positional goods — goods whose value comes not from their intrinsic utility, but from their social scarcity (e.g., elite schools, exclusive neighborhoods). If this is true, then expanding material growth does little to raise collective welfare; it merely shifts the ladder.
But recognizing sentience reorients our view of value entirely. It reminds us that the highest forms of harm and fulfillment are not always economic. Caged pigs, lab chimpanzees, and exploited AIs don't show up in productivity statistics — but they should show up in our moral calculus.
To truly integrate ethics into economics, we must expand our models to include the interests of all who can suffer. That might mean redefining growth, reimagining consumption, and reassessing what counts as “wealth.” It might mean building a world that is less focused on more, and more focused on better — not just for us, but for all sentient life.