The Real Danger of Artificial Intelligence

ChatGPT has set off a panic, not only amongst some educators, worried that it will encourage plagiarism (or perhaps even call into question the nature of authorship), but amongst media prognosticators and a few maverick tech mandarins (whom I always suspect of raising alarms only in order to raise share prices), who warn that AI is coming not only for academic integrity but for our very humanity. They are not wrong to worry. A long history of science fiction dystopias has painted a picture of uncaring machines turning on their creators. Moreover, people who know a lot more about the science (forget about the fiction) plausibly speculate that an artificial intelligence would likely have very different motives than a natural intelligence, motives that we might find malignant but it would find normal. Hans Moravec, a robotics engineer and prophet of the post-human age, argues that just as technologically advanced human societies conquered and exploited less technologically advanced societies, so too will an artificial superintelligence likely eliminate the fleshy form of life as inferior and irrelevant. Nick Bostrom, a leading transhumanist philosopher, likewise warns his more cheery fellow transhumanists that there is no guarantee that a superintelligent machine would care one whit for the joys and moral principles of human beings. I take these warnings seriously, but I also think that the nightmare scenarios they paint of coming robot wars tend to distract from a less spectacular but probably more dire (because more probable) threat that the further development of AI poses.

Part of that threat concerns job markets: now that middle-class intellectual and professional careers are threatened by AI, their members are desperately ringing the tocsin. The stoicism that they preached to generations of manual workers faced with technological unemployment is noticeably absent from their pleas to governments to start to regulate and restrain further AI research. Their hypocrisy aside, this side of the danger is real, and twofold.

On the one hand, we still live in a capitalist society where most of life’s necessities are commodities. In order to access the goods and services that we need, we require an income, and for most of us that income means selling our labour power. That side of the problem could be addressed if the surplus value produced by our labour were collectively controlled rather than privately appropriated as profit. If collectively produced wealth were democratically controlled, we could rationally reduce socially necessary labour time. Surplus wealth would then create the conditions for everyone to enjoy more free time. Needed goods and services would be publicly funded and available on the basis of need. The realm of freedom, as Marx put it, would expand in proportion to the reduction of the realm of necessity (of having to labour for the sake of survival and development).

However, the second side of the problem would not be solved, and might even be exacerbated, if the liberatory promise of technological development were realized. The problem here is existential rather than social or economic. The technotopian dream behind the development of AI is to collapse the difference between freedom and necessity. Ray Kurzweil, the author of The Singularity is Near, argues explicitly that the emergence of machine intelligence is a new plateau of evolution. He interprets evolution in teleological terms as tending towards higher levels of integrated complexity and intelligence. The logical end point of this development is omniscience: God is not a transcendent spiritual reality but the future outcome of the development of life.

In Kurzweil’s view, human beings are but a stepping stone on the way to the emergence of omniscience. Artificial intelligence is the necessary next step. Out of humanistic concern for well-being, he argues, we must have the courage to let our creations unfold along their own evolutionary path. Our transhuman present will become a posthuman future. There will no longer be flesh-and-blood human beings; instead, our consciousness will be preserved within the neural networks of the superintelligence (God) that succeeds us.

One might be tempted to dismiss this speculation as utopian theogony and not science, but I think we have to examine carefully the way in which it understands human values and the good for human beings. As I argued in both Embodiment and the Meaning of Life and Embodied Humanism: Towards Solidarity and Sensuous Enjoyment, the real danger of technotopian arguments is not that they might be true at some distant point in the future, but that they change how we understand human intelligence, human relationships, and the good for human beings in the present. Although Kurzweil and other technotopians claim to be the inheritors of the humanist values of the Enlightenment, they in fact understand human intelligence and the good for human beings in machine terms. Consequently, they fail to understand the essential importance of limitations (another word for necessity) in human life.

Think of the importance for our psychological well-being of feeling needed. One of the signs of serious depression leading to suicidal thoughts is the belief that no one needs you and that the world would be better off if you killed yourself. An effective therapeutic intervention involves convincing the person that in fact others do need them. But why does anyone need anyone else? Because we are limited beings: we cannot procure everything that we need to live through our own efforts; we cannot endlessly amuse ourselves but need to talk to others; the objects of our knowledge lie outside of ourselves and we must work to understand them. So too the objects of our creative projects: they must be built from materials with their own integrity, which might not be receptive to our designs. We must therefore work to realise our ideas, and we must have the strength to bear failure and the humility to change plans. The good for human beings emerges within this matrix of material necessity. The difference between having a real and an imaginary friend is that we have to work on ourselves to convince other people to like us.

Kurzweil wants, in effect, to abolish this difference. Once material reality has been absorbed by virtual reality there will no longer be a meaningful difference between real and imaginary friends. In a real and not metaphorical sense all friends (in fact, all of reality) will be a function of the imagination of the superintelligence. Since for Kurzweil everything, including inanimate matter, is information, nothing essential would be lost once the material is replaced by the digital simulation. We only hang on to this metaphysical distinction because our minds (our information-processing capacity) remain attached to a needy body that depends upon connection to nature and other people. But that archaic metaphysics is maintained by fear: as the Singularity approaches we must have the courage to die in our fleshy body to be reborn, as St. Paul said, in our (digitized) spirit body.

Just as love of one’s neighbour can easily be converted into a divine command to destroy the enemy, so too transhumanist philanthropy can become a war against what is most deeply and fully human. That is the real danger: that artificial intelligence will re-code the way that we understand our evolved and social intelligence and cause us to prefer the former to the (much more subtle, rich, and complex) latter.

Science has long generated metaphorical ways of understanding life. Aristotelian science understood living things as active souls shaping passive matter; in the Enlightenment this conception gave way to a mechanistic understanding of life, as notoriously expressed in La Mettrie’s epochal Man a Machine (1748). Today that metaphor is giving way to the metaphor of life as information and intelligence as information processing. Since information processing is just what computers do, it is no exaggeration to say that we are coming to understand ourselves as a reflection of the machines that we have built. Whether or not they turn on us, Terminator-like, they will kill something essential in us if that metaphor takes hold to the extent that we start to think that our intelligence is solely in our brain and our brain is an information processor.

I am not denying that the advances made by AI researchers are real, or that much of our intelligence can be captured by computational models of neural activity. But that which makes human intelligence distinct from machine functioning is that it is inseparable from caring, meaningful relationships to the environment. We are not brains in vats (as Hilary Putnam entertained in a famous thought experiment) but living intelligences standing in meaningful relationships to the natural world, each other, and the universe as a whole. As Teed Rockwell shows in his brilliant book Neither Brain nor Ghost, we cannot understand what brains do if we abstract their activity from the embodied whole of which they are a part. What we see, feel, and so on are not unique functions of the discrete activity of brains but are shaped by the whole nervous system in complex relationships to the world. And, as Marx presciently argued in the 1840s, the senses themselves are affected by historical and social development. Would Aristotle hear music or unbearable noise if he were brought back to life and taken to a rock concert?

Thus the real danger of further AI development is that it will cause us to dehumanize ourselves and off-load more and more forms of meaningful activity and relationships to a virtual world. And I have no doubt that, barring some global catastrophe that collapses social institutions, this result will come to pass (despite my best efforts in Embodiment and the Meaning of Life and Embodied Humanism). Talk of regulating AI development is nothing more than hot air. If researchers are forbidden from pursuing their projects in one jurisdiction, another will make itself available. The perceived economic and military “benefits” are simply too alluring for governments to seriously pass up. (I say “perceived” because, as economic historian Robert Gordon has shown, the last decade of the computing revolution has not produced the expected rise in labour productivity.)

Whatever the real or imagined benefits, as the technologies become more ubiquitous they will reshape our social relationships. Hartmut Rosa shows (in Social Acceleration) how a technology that is disruptive to one generation becomes the new normal for a later generation. Opposition to technologically driven social change quite literally dies out.

Old-school humanists like me might fret at the loss of spontaneity and risk in social life, but a person born today will not understand the value of spontaneity and risk if they grow up in a world where they expect all uncertainty to have been programmed out of existence. And that leaves me with a question that I cannot answer (well, perhaps I can, but I do not like what I think the answer might be): are the values of embodied social existence really universal and ultimate (as I have argued), or are they relative to an undeveloped technological era, perhaps to be admired by future cyborgs in the way we can appreciate the beauty of Aristotle’s hylomorphism without believing that it is true?
