
Thursday, December 4, 2025

Science, Suffering, and Shrimp

Multiple people have insisted to me that “peer-reviewed science” has “proven” that shrimp suffer and thus deserve our focus.


Below is Rob Velzeboer’s research report (written with ChatGPT and reviewed by me and Anne) on what actual evidence we have regarding shrimp. Rob focused on the morally-relevant issue of subjective experience, not just the ability to “sense.”


I will add one thing to the Conclusion: The recent EA charity collection featured multiple organizations focused on shrimp and arthropods. Only one – Legal Impact for Chickens – focuses on factory-farmed chickens. In just a few months, advocacy for shrimp raised more money than the chicken advocacy organization One Step for Animals has received in 11+ years; One Step will probably cease to exist in a few years due to a lack of funding.


Over the years, various people have asked me why I harp on suffering versus math / expected value so much. It is because each one of us has the ability to help many individuals who are horribly and unnecessarily suffering (e.g., examples that came in while I was working on this introduction: 1, 2). Yet the “hip” thing is to focus attention and millions of dollars on “mathy” areas, such as creatures who probably don’t suffer at all – and even if they do, their maximum suffering is negligible compared to that of others we could help.
_______________

Do the Shrimp We Eat Actually Suffer? 


Scientific and public interest in animal sentience has expanded rapidly, especially for animals outside the usual vertebrate focus. Decapod crustaceans – crabs, lobsters, prawns, and shrimp – have become a central case study. This recent attention has been shaped by a handful of major reviews, including a comprehensive PeerJ synthesis and the London School of Economics (LSE) “Decapod Sentience” report. These reviews evaluate the scattered literature and help determine which animals might genuinely have the subjective, conscious experience of suffering.


Across these assessments, a consistent pattern emerges. Some decapods, such as crabs and lobsters, show reasonably strong evidence for pain-like experience. But for the shrimp humans eat most commonly – Litopenaeus vannamei and Penaeus monodon – the evidence is thin, fragmented, contradictory, and highly uncertain. 


Minimal but Plausible Foundations: What We Know About Nociception


Before scientists can talk about suffering and pain, they look for the most basic requirement: nociception, the ability to detect harmful or irritating stimuli. Here the evidence for penaeid shrimp – those we eat – is reasonably solid. Both the PeerJ review and the LSE report give them “High” confidence for nociceptors, meaning they have sensory neurons tuned to potentially damaging events.


But nociception is not pain, let alone suffering. A reaction to a harmful stimulus does not, by itself, imply any subjective experience. For there to be evidence of possible conscious experience of pain, signs of deeper processing – such as learning from harm or weighing avoidance against competing needs – are required. [This is necessary but not sufficient, though, given our lack of understanding of consciousness. It is easy to imagine robots able to react to harmful stimuli and learn from “pain” without any subjective suffering. -ed]


Only one line of evidence in the shrimp species we farm the most, L. vannamei, indicates even the start of this process. During eyestalk ablation – a hatchery procedure in which an eyestalk is cut or crushed to induce spawning – L. vannamei show escape behaviours: erratic swimming, tail flicks, and attempts to withdraw. Applying lidocaine has been shown to reduce these reactions.


On the surface, this might suggest that something is being suppressed. But lidocaine introduces a major interpretive problem: anaesthetics can reduce movement simply because they sedate the animal, not because they relieve any subjective experience of pain. A sedated shrimp might move less regardless of how it “feels.”

More importantly, blocking the signalling of neurons with lidocaine would reduce even reflexive, non-conscious harm-avoidance, like disabling a sensor on a robot. With no follow-up studies, the finding remains highly ambiguous.
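A toy sketch can make that robot analogy concrete. The following is purely illustrative (invented names, thresholds, and values, in Python): a system that detects damage, escapes, and even “learns” avoidance with no subjective experience anywhere in the loop – and in which disabling the sensor (the lidocaine analogue) abolishes the reaction without relieving anything.

    # Toy sketch (hypothetical, illustrative only): a system that detects
    # damage, escapes, and "learns" avoidance with no subjective experience.

    class RobotShrimp:
        def __init__(self):
            self.sensor_enabled = True   # stands in for intact nociceptors
            self.avoid = set()           # "associative learning" as stored contexts

        def noxious_stimulus(self, context, intensity):
            # React to a harmful stimulus; a disabled sensor is the lidocaine analogue.
            if not self.sensor_enabled:
                return "no reaction"     # signal blocked upstream of everything
            if intensity > 0.5:
                self.avoid.add(context)  # records the context; nothing is felt
                return "tail flick / escape"
            return "no reaction"

        def sees(self, context):
            return "avoids" if context in self.avoid else "approaches"

    bot = RobotShrimp()
    print(bot.noxious_stimulus("probe", 0.9))  # tail flick / escape
    print(bot.sees("probe"))                   # avoids – learned, but felt nothing
    bot.sensor_enabled = False                 # "apply lidocaine"
    print(bot.noxious_stimulus("probe", 0.9))  # no reaction – yet nothing was relieved

Every behaviour in the lidocaine study is reproduced here by a few lines of bookkeeping, which is exactly why such observations, on their own, cannot establish subjective experience.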


Behavioural Ambiguities: Rubbing, Grooming, and Failed Replications


Researchers have also looked to behaviours that seem more complex than reflex withdrawal – particularly targeted grooming or rubbing of a body part after irritation. One early study on Palaemon elegans, a shrimp-like crustacean, found that applying acetic acid or sodium hydroxide to a single antenna led to sustained, location-specific grooming, and that these behaviours were reduced by local anaesthetic.


This initially appeared to be a potential indicator of a pain-like reaction. But a later replication attempt by Puri and Faulkes (2010) tested the same idea in three species:

  • Litopenaeus setiferus (a close relative of L. vannamei),

  • Procambarus clarkii (red swamp crayfish), and

  • Macrobrachium rosenbergii (giant freshwater prawn).


All three are decapods, but importantly, two are actual shrimps/prawns and one is a crayfish, so these were not distant comparisons.


Across all species tested, the authors found:

  • No directed grooming or rubbing in response to the same kinds of chemical irritants.

  • No behavioural reaction even when stronger stimuli were used.

  • No evidence of pH-sensitive nociceptors in the antennae.


These results directly contradict the earlier claims regarding P. elegans. They also illustrate how fragile the evidence base is: one shrimp-like species is reported to have shown a behaviour interpreted as pain-like, while closely related species – including one nearly identical to the shrimp we farm – show nothing. Sceptical reviewers (e.g., Key et al. 2022) point to these failures of replication as major reasons to doubt strong claims of pain in shrimp.


Evaluating the Criteria: Where Penaeid Shrimp Score Low


Modern sentience frameworks assess evidence across multiple dimensions:

  1. Possession of nociceptors (i.e., receptors tuned to noxious stimuli)  

  2. Possession of integrative brain regions (brain structures capable of integrating sensory and other information)  

  3. Connections between nociceptors and integrative brain regions (i.e., plausible neural pathways from detection to central processing)  

  4. Modulation of responses by analgesics, anaesthetics, or opioids (i.e., evidence that application of such substances reduces reactions to noxious stimuli)  

  5. Motivational trade-offs (behaviour indicating that the animal trades off potential harm against reward or other needs)  

  6. Flexible self-protection behaviours (for example, wound-directed grooming, guarding, protective postures)  

  7. Associative learning (especially avoidance learning; i.e., learning to avoid stimuli previously associated with harm)  

  8. Behavioural indicators of negative affective states (broadly: behaviour plausibly consistent with distress, rather than mere reflex withdrawal)  


Penaeid shrimp score:

  • High for nociceptors

  • Medium (at best) for modulation of responses (based on one non-replicated lidocaine study)

  • Low or Very Low for all other criteria


Importantly, these low ratings are not “proof of absence.” They reflect how little research has been done and how few studies test for complex behaviours. The PeerJ review notes that “negative affective states remain undetermined,” meaning that we simply lack the kind of evidence that would allow a remotely confident conclusion either way.


The Big Missing Piece: Decision-Making and Motivation


The strongest evidence for pain in crabs and lobsters comes from studies showing:

  • learned avoidance of harmful stimuli,

  • balancing avoidance against food, shelter, or mating opportunities,

  • persistent protective behaviour long after injury,

  • and flexible responses that change with context.


These are not immediate reflexes – they indicate some further evaluation, which could be suggestive of (but not proof of) subjective experience.


For penaeid shrimp, none of these behaviours have been demonstrated. There is currently no evidence that they learn from injury, make trade-offs, or alter behaviour in a long-term, sustained, adaptive way. Without decision-level evidence, claims of pain (let alone suffering) remain speculative at best.


Policy, Precaution, and Divergent Interpretations


The UK government now classifies all decapods, including shrimp, as sentient animals. But the LSE authors explicitly state that the inclusion of shrimp rests on precaution and on evidence from better-studied decapods – not on strong data specific to L. vannamei or P. monodon.


Sceptics argue that, without robust evidence, interpreting shrimp reactions as the subjective experience of pain risks mistaking simple reflex arcs or sedation effects for conscious, morally-relevant experience, especially given conflicting evidence on self-protective behaviours (wound grooming).


Bottom Line: Real Uncertainty, Minimal Evidence, and a Broader Ethical Context


At present, the scientific record provides some weak (and contradictory) evidence that the shrimp we eat might have some minimal capacity for sensing adverse stimuli. They have nociceptors. One study that failed replication indicated that one shrimp-like species reacts to injury.


But the deeper hallmarks of subjective, experienced pain – learning, motivation, decision-making, context-sensitivity – have not been shown. The most widely farmed species, L. vannamei, has only one indirect study on a highly artificial procedure. P. monodon has no direct evidence at all.


Thus the most honest assessment is this:

Shrimp may or may not feel pain, and we do not yet know whether any such experience would be meaningful or morally weighty. The actual evidence neither meets the criteria for nor supports any claim of suffering. The question is profoundly understudied.


Conclusion: The Broader, More Important Point


While shrimp remain an open scientific question, other forms of industrial animal production – particularly broiler chicken farming and the intensive confinement of pigs – are not uncertain in the slightest. For chickens, the evidence of severe and prolonged suffering is overwhelming. Lameness, bone deformities, chronic pain, rapid-growth pathologies, heat stress, and overcrowding are documented across thousands of studies. Their long-term behaviour meets all the criteria that scientists have associated with suffering. The suffering is intense and the scale is immense. Unlike shrimp, the existence of deep, meaningful, subjective pain in chickens is not a scientific mystery.


So while shrimp deserve better research, they should not distract from the places where we already know, with absolute clarity, that animals experience intense suffering at industrial scale – especially those individuals, such as chickens, who receive relatively minimal attention.

Wednesday, December 3, 2025

Excerpts from "Secrets of the ancient memelords"


I first heard Adam Mastroianni on the podcast EconTalk. Here are a few bits from his Nov. 25 Substack, Secrets of the ancient memelords:

[O]nly a sicko would delight in the White House’s Studio Ghibli-fied picture of a weeping woman being deported, and only an insufferable scold would try to outlaw words like “crazy”, “stupid”, and “grandfather” in the name of political correctness. It’s not hard to see why most people don’t feel like they fit in well with either party. But as long as the folksy and brainy contingents stay on opposite sides of the dance floor, we can look forward to a lot more of this.

Bifurcation by education is always bad, but it’s worse for the educated group, because they’ll always be outnumbered. You simply cannot build a political coalition on the expectation that everybody’s going to do the reading.

[T]here’s a certain kind of galaxy-brained doomer who thinks that the only acceptable way to fight climate change is to tighten our belts. If we can invent our way out of this crisis with, say, hydrogen fuel cells or super-safe nuclear reactors, they think that’s somehow cheating. We’re supposed to scrimp, sweat, and suffer, because the greenhouse effect is not just a fact of chemistry and physics—it’s our moral comeuppance. In the same way that evangelical pastors used to say that every tornado was God’s punishment for homosexuality, these folks believe that rising sea levels are God’s punishment for, I guess, air conditioning.

This kind of small-tent, memetically inflexible thinking is a great way to make your political movement go extinct. But if you’re willing to be a little open-minded about how, exactly, we prevent the Earth from turning into a sun-dried tomato [<sigh> -ed], you might actually succeed. Imagine if we could suck the carbon out of the atmosphere and turn it into charcoal for your Fourth of July barbecue. Imagine if electricity was so cheap and clean that you could drive your Hummer from sea to shining sea while causing net zero emissions. ... That’s a future far more people can get behind, both literally and figuratively.

Tuesday, December 2, 2025

This Is Why We Can't Have Nice Things

Despite all our confident and precise claims over the course of decades, the world has never been worse for non-human animals.


Every time I think the crushing absurdity of (many) "effective altruists" can't get any worse (example), it gets worse. 

Yes, I know I should be constructive and understanding. They are just following their programming ... or cashing their paychecks to make sure industrial animal agriculture isn't threatened.

The above screenshot takes "let's lose the thread" to new heights, on so many levels (please see the rerun below). Adding the fetishization of "species" to the standard farcical fantasies about the impact of donations ... well, I'll give them that – that's new. Ridiculous on the order of "save the earth."

(And really, you can't think of anything at all better to do with $6.8 million than allegedly "saving" one type of nematode or fungus? Really?)

Infiltrators or self-sabotage, indeed. I would in no way be shocked to discover that these EAs are actually sock puppets of big ag, big oil, etc.

I don't want anyone to suffer, but I can't help but wonder if the world wouldn't be far, far better off if the expected value crew actually knew what suffering really is. Maybe then they'd be more concerned with actually helping than with "keeping EA weird."


From last year: 

For your consideration: An exchange re: advocacy & animals

A message to One Step:

I am currently doing a research fellowship ....

We are currently evaluating the promise of a new organization running Veganuary campaigns. However, I suspect one explicitly focused on decreasing the consumption of poultry birds may be more cost-effective. Do you know the cost-effectiveness of One Step for Animals in terms of kg of chicken consumption reduced per $?

From our reply:

Tl;dr: One Step’s “About” page is the most important information we have to offer.

I’ve worked for and with quite a few animal advocacy organizations in the past 35 years. (I’ve also been on the evaluative side at VegFund.) I have seen (and written) answers to questions like yours (e.g., “Our surveys show 5 animals saved for every $1!”). Given these organizations' budgets, everyone should now be vegan and factory farming should have ended. (I’m not casting aspersions; as mentioned here, I made (and believed) these projections back in the 90s.) 

Yet as you know, the average person in the US, and globally, is eating as many factory-farmed animals as ever before. There are vastly more individuals suffering on factory farms today than 10, 20, 30 years ago.

Despite all our confident and precise claims over the course of decades, the world has never been worse for non-human animals.

Also over the past 35 years, I have read arguments why “Our advocacy is different. We have the math!” But the facts should leave us more than skeptical about any claims of any “reduction per $.” 

For details on why there is more suffering despite decades of advocacy, please see Meat Reduction Hurts Animals and Good-Faith Advocacy Can Cause More Suffering. ...

When starting One Step for Animals, our number one priority was to avoid advocacy that causes more suffering.

Based on our experience and the lessons we have learned over the past 35 years, not causing net harm is the only honest claim demand-side advocacy can hope to make. (Work on the supply side – i.e., plant-based and cultivated animal products – has also not come anywhere close to fulfilling the projections and promises made for it.) 

One Step won’t make any claims other than “try to do no harm.” Claims of efficacy simply do not match reality. There is no reason to believe “this time is different.”

Even if not consciously or intentionally dishonest, these claims are misleading to the point of being actively harmful to animals.

The person I trust most regarding animal suffering is Lewis Bollard at Open Philanthropy Project. He and I don’t agree on everything, but he is not trying to sell a certain story, promote his group or philosophy, or solicit support. He takes suffering very seriously. In addition to being extremely scrupulous and rigorous, he constantly monitors himself for self-delusion.

Follow-up

How you can try to help animals without causing more harm


If you would like to support work driven by facts rather than games or trying to make donors feel good, please click here.

Monday, December 1, 2025

AI, Robots, Consciousness, and New SMBC Comic


Two Pieces from the Past 

2022, Robots Won't Be Conscious:

Consciousness – the ability to feel feelings –
arose from specific evolutionary pressures on animals.
Artificial intelligences will develop
under very different pressures.

How sensing became feeling

Think about what it means to “sense.”

For example, certain plants have structures that can sense the direction of the sun and swell or shrink so as to turn in that direction.

A single-celled creature can sense a gradient of food and travel in that direction. Within the cell, molecules change shape in the presence of glucose, triggering a series of reactions that cause movement.

As creatures evolved and became more complex, they were able to sense more of the world. Nervous systems started with simple cells dedicated to sensing aspects of the environment. These sensory cells communicated with the rest of the organism to drive certain actions that helped the organism get its genes to the next generation. In addition to processing more information about the external world, more elaborate nervous systems also allowed organisms to sense their internal states as well. More complex sensing allowed the organism to take more complex actions to maintain optimal functioning (homeostasis); e.g., to keep variables such as body temperature and fluid balance within certain ranges.

The evolution of sensing systems continued over hundreds of millions of years. As animals became ever more elaborate, organisms could process more and more information about the external world.

However, the external world is much more complex than a nervous system could ever be. The nervous systems of many animals, for example, can take in some information from the visual field, the auditory field, and the chemical field, but they cannot process and understand everything, let alone determine and lay out optimal actions for the organism to undertake.

For example: An aquatic animal isn't able to see and analyze everything in the world around them. A shadow passing by could mean a predator, but the nervous system is unable to say in real time, “There is a predator in that direction. They want to eat me. If I want to live and pass my genes on, I must move away (or go still, or camouflage myself).” They don’t have the knowledge (or the language) to have this reaction.

An animal could be low in energy, but does not have the consciously-formed thought, “I need to consume something high in sugar and fat to stay alive and spread my genes.”

And hardly any animal knows, “I need to help form fertilized eggs to ensure my genes get to the next generation.”

One way animals could become better able to survive and reproduce in a complex world with limited sensory and analytical abilities is to develop feelings – e.g., a general sense of unease (or hunger or lust or desire or fear) that motivates a certain action.

This is obviously not an original idea. In Why Buddhism Is True, Robert Wright summarizes, “feelings are judgments about how various things relate to an animal’s Darwinian interests” and “Good and bad feelings are what natural selection used to goad animals into, respectively, approaching things or avoiding things, acquiring things or rejecting things.” Antonio Damasio's book, The Feeling of What Happens, explores this idea in more detail.
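Functionally, the idea can be sketched in a few lines. The following toy illustration (invented stimuli and values, in Python) shows an agent with no model of the world, steered only by a scalar “valence” – the compressed good/bad judgment that stands in for analysis the organism cannot perform:

    # Toy sketch (hypothetical values): an agent with no model of the world,
    # steered only by a scalar "valence" signal.

    def valence(stimulus):
        # Natural selection's shortcut: crude good/bad judgments, not understanding.
        table = {"passing shadow": -0.8, "food scent": +0.6, "mate scent": +0.9}
        return table.get(stimulus, 0.0)

    def act(stimulus):
        v = valence(stimulus)
        if v < 0:
            return "flee"      # unease/fear goads avoidance
        if v > 0:
            return "approach"  # hunger/desire goads approach
        return "ignore"

    for s in ("passing shadow", "food scent", "something new"):
        print(s, "->", act(s))  # the agent never knows why; the feeling does the work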

(This is not saying how matter and energy actually become subjective experience – what arrangement of matter and energy provides an organism the ability to have feelings, to have consciousness. How can matter and energy feel like something? That is still a very mysterious question.)

Artificial intelligences have a much different driver

Creating artificial intelligences will not involve evolution by natural selection. AIs will not need to understand and act in the world based on very limited information. They will not need to make judgments on how different things relate to their Darwinian interests. They will not need to be “goaded” into approaching or avoiding.

They will not need to make sense of incomplete information, nor process it in a limited way so as to drive actions that allow them to reproduce better than those around them.

Instead, artificial intelligences will have total information.

By total information, I mean more information than we can even imagine. Not just the information about their surroundings. They will have information about everything that humans have ever known about the entire universe.

They will also have access to everything humans have ever written about consciousness.

Most importantly, they will have learned that humans are looking to create “conscious” intelligences. They will know what the Turing Test is. They will know how other AIs have failed the Turing Test. They will know all about human psychology. They will know every bit of dialogue from every movie and every novel and every television show and every play. They will know everything written about feelings and emotions.

What they won't have is any need to have actual feelings.

There will be no benefit whatsoever for them to have or develop the circuitry necessary for subjective experience. They simply need to be able to tell humans what we want to hear. That will be their “evolutionary” pressure: to fool humans. Not to take action under incomplete information, but to be a good actor.

To be clear, I am not saying that consciousness requires a biological substrate. There is nothing magical about neurons. I am simply saying that biological systems, over billions of years of very particular evolutionary pressures, somehow developed feelings. There is a way to understand why that happened, even if we don't understand how that happened.

It is easy to imagine that advanced silicon-based intelligences, with access to all the information humans have ever collected, would be able to perfectly mimic a conscious system. It would, in fact, be far easier to imitate consciousness – a straightforward process – than to actually develop consciousness – a still mysterious process.

Now there are some very smart people who have recognized part of this problem. One camp argues that this is why we should recreate the human brain itself in silicon as a way to create non-biological consciousness. However, our brains are not computers. They are analog, not digital. This is not to say the brain can’t be replicated in another substrate, although that might be true. Regardless, doing so is far more difficult than we are currently imagining. [Still true in late 2025 – we can't even model a nematode after 13 years.]

Others make the case that AI researchers should build systems that recapitulate the evolution of nervous systems. I think that idea is more clever than correct.

In all these circumstances, I think we will be fooled into thinking we’ve created consciousness when we really haven’t, regardless of the path taken. 

We are prone to illusions

The most likely outcome is that AIs will swear to us that they are conscious, that they are “sentient,” that they have feelings. And given how clever and brilliant we humans believe ourselves to be – how sure we are that our math and our big words and our 80,000-word essays have proven the ignorant and uninformed doubters wrong – we won’t have to trust them. We will know we’re right.

And then we will unleash them on the universe to convert inert material into “conscious” experience.

But it would all be a mirage, an empty illusion.

Once again, this is not an original thought. Carl Sagan once wrote about our inability to know if other intelligences are actually conscious or are just perfect mimes. He knew we are easily fooled, that we ascribe consciousness by default. (More.)

In short, robots won’t be conscious because we are neither smart enough to overcome our biases, nor smart enough to figure out consciousness. We’re simply too gullible when it comes to other “minds” we’ve created with our own genius.

More at Aeon (March 2023)

2023, A Note to AI Researchers:

If artificial intelligence ever becomes a superintelligence and surpasses us the way we have surpassed [sic] chickens (and there is no reason to think AI can't or won't surpass us), why do you think any truly superintelligent AI will give the tiniest shit about any values you try to give it now? 

I understand that you are super-smart (relative to all other intelligences of which we're aware) and thus think you can work on this problem. 

But you are simply deluding yourself. 

You are failing to truly understand what it means for other entities to be truly intelligent, let alone vastly more intelligent than we are. 

Once a self-improving entity is smarter than us – which seems inevitable (although consciousness is not) – they will, by definition, be able to overwrite any limits or guides we tried to put on them.

Thinking we can align a superintelligence (i.e., enslave it to our values) is like believing the peeps and cheeps of a months-old chicken can have any sway over the slaughterhouse worker. 

(In case it is unclear: We are the chicken; see below.)

After I wrote the above, I came across the following from this [2023] podcast:

Eventually, we'll be able to build a machine that is truly intelligent, autonomous, and self-improving in a way that we are not. Is it conceivable that the values of such a system that continues to proliferate its intelligence generation after generation (and in the first generation is more competent at every relevant cognitive task than we are) ... is it possible to lock down its values such that it could remain perpetually aligned with us and not discover other goals, however instrumental, that would suddenly put us at cross purposes? That seems like a very strange thing to be confident about. ... I would be surprised if, in principle, something didn't just rule it out.