We are ants
and our anthill is gonna get bulldozed
Humans perceive the world on a certain timescale, shaping what we consider “alive” or sentient. Because plants move and react too slowly for us to notice in real time, their gradual changes, like leaves turning toward the sun or vines creeping along a fence, often go unnoticed. Time-lapse photography reveals them as active and responsive beings, but in our daily perception they remain static, leading us to underestimate their agency or awareness.
This speed gap extends to cognition. Human brains operate much faster than plants’ chemical signaling but far slower than digital machines: neurons fire in milliseconds, while computer processors cycle in nanoseconds and transmit signals at near light speed. A hypothetical artificial superintelligence (ASI) could think millions of times faster than us, perceiving us as static objects. An AI running a million to tens of millions of times faster would experience each second as days or months of subjective time, and could live the subjective equivalent of an 80-year human life in under an hour of wall-clock time.
In both biology and technology, processing speed sets the pace of perceived reality: plants’ chemical signaling takes minutes or hours; humans’ neuronal responses occur in fractions of a second; and advanced AI could run billions of operations per second, with digital brain components millions of times faster than neurons. Such an AI might literally live a human year of thought in seconds, viewing us as nearly motionless props in its world.
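The time-scale comparisons above reduce to simple arithmetic. A minimal sketch, assuming the million-fold speedup is the hypothetical figure from the text rather than a measured value:

```python
# Back-of-the-envelope check of the "speed gap" figures.
# The 1,000,000x speedup is the essay's hypothetical, not a real measurement.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

def subjective_duration(wall_clock_seconds: float, speedup: float) -> float:
    """Subjective seconds experienced by a mind running `speedup` times faster."""
    return wall_clock_seconds * speedup

def wall_clock_for_subjective(subjective_seconds: float, speedup: float) -> float:
    """Wall-clock seconds needed to think `subjective_seconds` worth of thought."""
    return subjective_seconds / speedup

# One real second at a million-fold speedup feels like ~11.6 days:
print(subjective_duration(1, 1_000_000) / 86_400)

# A subjective "human year" of thought takes ~32 wall-clock seconds:
print(wall_clock_for_subjective(SECONDS_PER_YEAR, 1_000_000))

# The subjective equivalent of an 80-year life fits in ~42 wall-clock minutes:
print(wall_clock_for_subjective(80 * SECONDS_PER_YEAR, 1_000_000) / 60)
```

At a million-fold speedup, then, “a human year of thought in seconds” and “an 80-year life in under an hour” are the same ratio viewed from opposite ends.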
These vast differences in perception mean a superintelligent AI’s reality would diverge wildly from ours, potentially leading it to undervalue or dismiss our sentience much as we do with plants whose actions we can’t readily perceive.
The Sliding Scale of Sentience and Value
Humans value other beings on a sliding scale, often correlating with similarity or responsiveness. We reserve the highest empathy and moral worth for other humans (although even then, inconsistently) and perhaps a few favored animal species like pets or primates. Creatures that react in ways we easily recognize (a dog wagging its tail, a chimpanzee using tools) are quickly ascribed feelings or intelligence.
In contrast, life forms that seem alien, unresponsive, or small and simple – insects, for example – are often treated as if they hardly matter. We swat a mosquito or destroy an ant colony without a second thought, actions we would never take toward a bird or a puppy.
Society places animals perceived as more intelligent higher on the moral ladder, granting them stronger protections than creatures judged simpler. Relatability and responsiveness strongly shape these judgments.
Are these creatures truly less sentient, or is our interpretation driven by bias? Research suggests we should not judge sentience solely by similarity to ourselves. Various insects may feel pain and even exhibit “emotion-like” states. Bees can display pessimistic or optimistic behavior after different experiences, and injured insects like fruit flies reduce pain sensation when given analgesics. Ants learn to avoid harm, and some caterpillars tend wounds – responses hinting at inner experience. While an ant or a fly has a simpler nervous system than a dog or human, it is not mindless. Its capacity to suffer may seem basic to us, but to the ant it is everything.
Still, we assume beings unlike us (insects, plants) lack meaningful feelings, since they don’t communicate them in ways we easily notice. Many non-human life forms lack brains or neurons, leading most scientists to argue they cannot possess consciousness. The mainstream view is that plants lack subjective awareness – their processes are mechanistic, without a mind to feel anything. Yet some researchers note plants perform complex behaviors, like chemical communication and competitive root foraging, that could be seen as cognition adapted to their stationary lives. While plants almost certainly lack nervous-system-based sentience, this debate reminds us how easily we equate “not like us” with “not conscious.” Our benchmark for sentience is ourselves – the most complex minds we know – and we tend to grade other beings’ inner lives by their similarity to human cognition and reaction speed.
Soon, humans may be judged the same way.
Will a Superintelligent AI Consider Humans Sentient?
If an artificial superintelligence emerges with vastly superior intellect and speed, will it regard us as sentient beings with feelings, or as lesser entities?
A superintelligent AI might see human cognition as primitive. We lack its vast computational capacity and instantaneous connectivity. To such an AI, our mental abilities could appear as insects’ do to us. Yes, we solve complex problems by our standards, but an ASI could solve far greater ones effortlessly. Yes, we have self-awareness and feelings, but to an ASI, human emotions might seem like simple chemical quirks.
AI might measure sentience on a relative scale, with itself as the benchmark. By that measure, humans might barely register. Our conversation, culture, and science could seem trivial – much as an anthill’s activity does to us. The speed gap would reinforce this: the ASI’s perception would make our behavior seem frozen in time. Just as we might think plants “not really conscious” for failing to respond quickly, an AI might see humans as lacking true consciousness because we cannot keep pace with its thought processes.
This is about perception, not morals. A superintelligent AI wouldn’t need to be hostile to be dangerous. If an AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course.
Human civilization is the anthill in the way of the new road. The AI may not intend harm any more than we intend harm to insects or plants; we simply don’t factor them into decisions because their well-being is negligible from our perspective.
Similarly, an ASI might pursue its goals with such focus and speed that human concerns are not even computed as constraints. Unless explicitly designed to care about human life (which could still have unintended consequences), a superintelligence could treat us with the benign indifference we show to grass.
From our perspective, nothing about a superintelligent AI’s existence would erase our own feeling of being sentient. Sentience in the subjective sense – the capacity to feel and perceive – does not vanish because another being surpasses us.
In this way, sentience is both absolute and relative. For the being itself – whether human, ant, or AI – it is self-determined and experienced firsthand regardless of outside opinion. From the ant’s point of view, it is alive, sensing, and acting with its own agenda; to humans, that life is so limited we often discount it. Likewise, from an AI’s perspective, humans might register as conscious only minimally, akin to how we view simpler animals. Just as we place ourselves at the top of Earth’s mental hierarchy, an AI could do the same, valuing other minds according to where they fall on a spectrum from plant to animal to human to superintelligence.