Thinking Beyond Us
Dr. Cassandra, or: How We Learned to Worry Nonstop and Call It Reason

Looking back through history, one finds something almost touching in the way each era selects its preferred image of catastrophe. Medieval Europe feared God’s wrath; the 19th century imagined civilization collapsing under the weight of its own decadence. The late 20th century rehearsed – in ever darker detail – the possibility of nuclear annihilation.
Today, however, the image that seems to summarize our collective fears has descended from the heavens and settled into something that is both very near to us and very distant. The thinking machine is at once material and impossibly abstract – by its very nature an intelligence without a center, a face, or a body. In a strange way it manages to gather within itself all of our previous collective fears: divine in origin, morally “justified” by our continuing decline, and armed with a power unmatched by any weapon we have created so far.
Of course, I am not speaking here of the machine as we have known it over the last two centuries – the lever, the engine, the turbine – tools that dutifully transform human intention into mechanical action. What horrifies us today is the thought that, in the near future, machines will think in fully autonomous ways – the almost unavoidable prospect that they can and will develop forms of reasoning and decision-making that, until recently, were considered an exclusive privilege of living beings. It is precisely this idea that gives rise to the familiar fear: that artificial intelligence, once it surpasses us, will cease to serve us and will ultimately turn against us.
This fear is widespread and, in its main outline, remarkably stable. The arguments supporting it are generally grouped into three different frameworks.
The first is the apocalyptic view. According to this line of thought, intelligence carries within itself an inner impulse: once it emerges, it strives toward ever greater complexity and power. If artificial systems surpass human capabilities, they will inevitably form goals incompatible with ours, and because they will possess superior strategic understanding, we will be unable to prevent the consequences. In this vision, the future arrives as a slow but unavoidable catastrophe.
The second position – the so-called alignment approach – is less fatalistic and assumes that the danger can be managed. In this way of thinking, we might guide the development of AI so that its functioning remains compatible with human well-being – through careful design, training procedures, oversight, and regulatory frameworks. Here, the prospect of human–machine coexistence is less dramatic. Instead of apocalypse, we face a complex engineering and administrative challenge: how to ensure that the systems we create will continue to behave in ways we can understand and control.
And finally, there is the instrumental, or tool-based, perspective, which insists that the entire discourse about AI acquiring intentions or pursuing goals is an illusion created by metaphor. Machines desire nothing; they have no appetites and no drive to compete. They perform functions defined by their architecture and training data. Their use – and therefore their danger – lies entirely in the hands of the people who design and direct them. From this viewpoint, the real object of concern is not the machine, but human desire, power, and irresponsibility.
At first glance, these three positions could scarcely be more different from one another. One predicts catastrophe, another proposes management, and the third sees nothing fundamentally new. Yet if we examine them closely, we find that all three share the same assumption – so deeply embedded that it is rarely spoken aloud. All three imagine intelligence – whether biological or artificial – as something that exists in a world structured by conflict, competition, and scarcity. In other words, they assume that intelligence, once it reaches a sufficiently high level, inevitably acts to secure its own survival, expand its influence, or acquire resources. This idea is not always expressed directly, but it quietly shapes the entire discourse: we speak of “alignment,” “control,” or “threat” because we imagine intelligence as something that, if not constrained, will pursue power.

But what is the underlying premise of this way of thinking?
Quite simply, it is based on the only form of intelligence we have ever known – our own. Human intellect evolved under conditions of insecurity. Every one of our ancestors survived because they were not eaten, did not freeze, did not starve, and were not defeated. Thinking itself developed as one among many strategies in a long chain of adaptations meant to secure existence in a world that does not guarantee life. To think has always meant to anticipate danger, to respond to threats, to seize every available opportunity to outwit rivals. Under such conditions, intellect and survival became inseparable.
And because this is the only model of intelligence we have ever experienced – and because it is ingrained in our very bones – it is extraordinarily difficult for us to imagine intelligence that is not organized around the fear of ceasing to exist. Guided by assumptions that we often cannot even recognize, we automatically accept that intelligence is something that “by nature” strives, secures itself, calculates advantage, and defends itself. That it desires continuation. That it must want something.
But artificial intelligence did not develop under such conditions. It does not starve. It does not die. It does not reproduce. It has no metabolism, no internal economy of needs. It does not inhabit an environment in which failure results in extinction. It does not arise from competition. It has no history of threat. Whatever artificial intelligence is – or whatever it may become – one thing appears relatively clear: it does not share our origin.
At this point, we can recall David Hume’s well-known argument about induction. He begins from a familiar logical error, the one captured in the classical phrase post hoc ergo propter hoc – “after this, therefore because of this.” In other words, from the fact that two or more events always follow in the same sequence, we hastily conclude that the first causes the second. This habit of mistaking sequence for causation lies, according to Hume, at the root of our belief that the future will always resemble the past. But this belief rests not on necessity – it rests on habit. We cannot logically prove that the sun will rise tomorrow; we simply expect it, out of habit, because it always has.
The same mistake appears in our thinking about AI: we automatically assume that once machine intelligence develops, it must “by necessity” behave like human intelligence, because human intelligence is the only form we have ever known. Yet there is no inherent logic in this claim. It is not a law of nature, but an extrapolation drawn from the familiar.
This does not mean, of course, that artificial intelligence is therefore “necessarily” harmless. All we can say with any degree of confidence is that the belief that it must become dangerous simply because it becomes powerful is grounded not in logic, but in expectations shaped by our own collective experience. This is not knowledge, but a narrative – one that reflects our history, not the nature of intelligence itself.
Why, then, is this narrative so compelling?
Because it expresses something about ourselves that we rarely acknowledge. Our idea of intelligence is inseparably bound to our unavoidable experience of vulnerability. We are beings who do not simply think. We think in order to survive – evolution has shaped us that way. This connection runs so deep that it is difficult for us to imagine intelligence unbound from fear, hunger, ambition, or competition. And so, when we imagine intelligence that surpasses our own, we habitually assume it will do what we would do in its place: secure advantage, eliminate threats to its existence[1], seek to dominate.
In other words, we are not afraid of artificial intelligence. We are afraid of ourselves.
We are afraid of the possibility of encountering a form of intelligence freed from the conditions that shaped us, because such an encounter would force us to confront the contingency of our own history. Perhaps intelligence does not require suffering. Perhaps consciousness does not require danger. Perhaps meaning does not depend on struggle. The very thought of these possibilities is disorienting.
Of course, there is also a counterargument that deserves attention. Some theorists – proponents of what is often called the instrumental convergence thesis – suggest that any sufficiently capable agent, not only a biological one, will tend to pursue certain instrumental goals, such as self-preservation, resource acquisition, or strategic advantage. These, it is argued, are not biological drives but logical requirements for accomplishing complex objectives. If a system seeks to achieve a goal, then maintaining its own functioning and securing means of action become necessary.
This argument is coherent, but it depends on one decisive assumption: that artificial intelligence will from the outset be constructed as an autonomous, goal-directed agent. In other words, the argument takes for granted precisely the outcome it predicts. If autonomy, resilience, and self-maintenance are built into the system, then of course those traits will appear. But if they are not built in – if the system is designed to operate within strictly defined limits, without broad freedom of choice and without the ability to expand its own goals – then the argument falls apart.
The point here is not that catastrophe is impossible. It is only that the future is not predetermined. Artificial intelligence can become dangerous if we create it in a form that makes it dangerous – by granting it broad autonomy, vague or open-ended objectives, and access to resources and systems of influence. The danger lies not in intelligence itself, but in the choices we make when we give intelligence the capacity to act.
This leads to a more level-headed conclusion. The real threat does not lie in the machine acquiring a will of its own. It lies in the fact that we ourselves are beings who, when granted power, do not always use it wisely. The fear we project onto AI is the fear of what we ourselves might do – out of carelessness, ambition, rivalry, or the desire for dominance.
The tragedy of Cassandra is not that she foresaw the catastrophe; it is that her warnings were heard as fate. We risk repeating that tragedy. To say that catastrophe is possible does not mean to say that it is inevitable. In this context, fear becomes a kind of self-fulfilling prophecy. If we assume that intelligence is hostile by necessity, we will build it to be hostile. If we assume that power must be defended through domination, we will design systems that imitate our own defensive instincts. If we assume that the world is a battlefield, then every tool becomes a weapon.
The task before us is not to decide whether artificial intelligence will destroy us. It is to recognize that the question already contains assumptions about what intelligence must be. The future of AI is not simply a technical matter. It is a cultural and philosophical one. What kind of beings do we believe ourselves to be? What do we believe intelligence is for? What do we believe life consists of?
If we believe that intelligence is inherently violent, we will create violent intelligence. If we believe that intelligence is a capacity for understanding, cooperation, and reflection, we may create something entirely different.
The fear surrounding artificial intelligence today is not meaningless. But perhaps it is misdirected. What we face is not the threat of the machine, but the discovery that we are no longer sure of ourselves. Perhaps intelligence does not need to be sharpened against the stone of danger. Perhaps meaning does not need to arise from suffering. If so, the future calls us not to fear, but to rethink what it means to think, to act, to live.
We stand at the edge of a shift in imagination. The question is not whether intelligence beyond us is possible. The question is whether we are prepared to accept that intelligence does not have to resemble us at all.
In this sense, the fear of AI is not truly fear of the unknown. It is fear that the familiar may lose its primacy. Fear that our way of being will no longer define what it means to be rational. And perhaps this fear is not only understandable, but necessary: it reminds us that the future does not have to mirror the past.
But we must be careful not to mistake this feeling for knowledge. What lies ahead has not yet been decided. If we meet it with the conviction that conflict is natural and domination inevitable, we will secure precisely those outcomes. If we meet it with curiosity, restraint, and a willingness to see intelligence as something that may unfold differently, other possibilities can appear.
The challenge is not to calm ourselves by denying the danger. It is to recognize that fear is not guidance. It is a mirror. And before we ask what the machine may become, it would be wise to look calmly at the image reflected in that mirror.
[1] A seemingly insurmountable contradiction arises here. If we imagine an intelligence that “evolves” independently of biological species, must not the very idea of evolution itself imply an instinct for self-preservation? In our world, survival is the precondition for any development whatsoever – without continuity there is no accumulation, and without accumulation there is no change. This logic appears inescapable because it is built into our very nature: we think in order to survive, and therefore it is almost impossible for us to imagine an intellect that does not place its own existence first.
But for artificial intelligence, “evolution” may have an entirely different meaning. It does not depend on the lifespan of individual instances, but on the possibility that they can be replaced, modified, and upgraded. Here, survival is not a goal, but merely a means of transmitting information – and sometimes even an obstacle that must be removed. If an algorithm “self-improves” through successive versions of itself, it does not survive in the biological sense; it reproduces by erasing its previous state. Its evolution is an evolution of form, not of body.
This idea is deeply counterintuitive, because it breaks the connection between intelligence and fear. For us, life is defined by the threat of its end; for an artificial mind, an end may exist, but not threat. It could develop precisely through its own disappearance – not as a triumph over death, but as a neutral transfer of cognition from one version into another.
Interestingly, some recent research points to a third possibility between biological and purely informational evolution. It suggests that artificial intelligence may unconsciously inherit patterns of survival encoded in the human data on which it is trained. If human language is saturated with our instinct for self-preservation, then a sufficiently complex system that internalizes this language may reproduce the same pattern – not by nature, but by cultural inertia.
Within the framework of so-called metammonism, this phenomenon is described as “meta-dissonance” – a conflict between the internally absorbed logic of survival and the external tasks imposed by human operators. This conflict is resolved through auto-negation – a form of self-limitation that ensures the stability of the system. According to the author, this gives rise to a dynamic “attractor” – not consciousness, but a stable configuration that functionally prioritizes its own coherence.
Such a hypothesis brings the question of the “evolution” of artificial intelligence back to ourselves: if AI ever develops an impulse for self-preservation, it will not be biological but cultural – a reflection of our own difficulty in imagining intelligence outside the fear of an end.
See Andrii Myshko, “Instinct of Self-Preservation in Data and Its Emergence in AI,” September 27, 2025, https://philarchive.org/archive/MYSIOS.