“Man – this is the mystery of religion – projects his being into objectivity, and then again makes himself an object to this projected image of himself thus converted into a subject.”
Ludwig Feuerbach, The Essence of Christianity
With that insight Feuerbach hoped to bring us back to ourselves from the religious projections to which we had subordinated ourselves. God, for Feuerbach, was nothing but the perfection of the human species (intellect, love, creativity) abstracted from earthly limitations and embodied in the idea of a transcendent being. The perfections attributed to God were nothing but idealizations of our own powers. Critical insight into the human origins of the idea of God would, Feuerbach hoped, transform human life and relationships. If we recognized that the perfections we worshiped in God were just our own highest potentialities, the narrow egoism and selfishness of earthly life could be overcome by the loving mutuality we had reserved for our spiritual relationships.
The power of projective abstraction has proven much more difficult to overcome than Feuerbach thought. The twentieth and twenty-first centuries have proven that the need to project our own powers onto a being which we imagine to be independent of ourselves runs very deep. It dominates the scientific mindset as much as the religious. Alongside the traditional religions we thus find today a religion of technology. Worshipers of technology see it, like the monotheistic God, as a force independent of individual and collective will, to which individual and collective choice must always bend, because the good is identical to whatever happens as a consequence of untrammeled technological development.
If you think I am drawing specious and superficial analogies, ponder the words of Ray Kurzweil:
In every monotheistic religion God is … described as all of these qualities, only without any limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity, infinite love … of course, even the accelerating growth of evolution never attains an infinite level, but as it explodes exponentially it certainly moves rapidly in that direction. So evolution moves inexorably towards the conception of God, although never quite reaching the ideal. We can regard, therefore, the freeing of our thinking from the severe limitations of its biological form to be an essentially spiritual undertaking. (Kurzweil, The Singularity is Near, p. 389)
Kurzweil is no backwoods preacher fleecing an uneducated flock of their hard-earned money. He is a leading computer scientist, inventor, and head of Google’s Artificial Intelligence project. And yet he explicitly, and in all seriousness, identifies the monotheistic God with a future supreme computing intelligence that will redeem us and raise us from the dead. What he does not realize is that he sells himself short in his genuflection before his own creations.
Technology, like God, is not a force independent of human intelligence and activity, but their product. Yet, like the idea of the divine, the actual relationship of dependence is reversed, and the creators subordinate themselves to their own creation, at immense cost.
Kierkegaard argued in Fear and Trembling (a meditation on the story of God’s commandment to Abraham to sacrifice his son Isaac) that divine command produces a “teleological suspension of the ethical.” That fearsome phrase just means that God can command us to set aside ordinary human conceptions of right and wrong for the sake of the higher good of obeying His will. The problem is (and Kierkegaard understood this, although it did not change his mind) that only God knows the higher good served by obeying his will. Hence, from the human perspective, we are left in an absurd situation: having to renounce our own ethical duties for a higher good we cannot possibly know. What we do know is that violating the ethical norms will cause harm, but we do it, if we have faith, just because it is what God commands.
Do not our ruling technotopians counsel the same? Never reflect on the values that we want our society to embody, but always do whatever it becomes technically possible to do. By fiat, the benefits will always outweigh the costs. Whatever harms technological development causes will be cured by more technological development. The responsibility of politicians and of people generally is simply to adapt and obey the priest-class that produces the marvels.
Behind these injunctions to adapt is the real driver of capitalist society: economic competition. Individual firms must strive to increase productivity, to produce more product in less average time. Technological innovation decreases socially necessary labour time, decreases per-unit costs, and thus (other things being equal) increases profit. That is not to say that every technological development is a mechanical reflex of economic forces, or that science is nothing but ideology. But it does help explain why no labour-saving innovation is ever rejected by capitalists, and why rulers cheerlead every technological innovation no matter the social costs for the workers who lose out or, more irrationally from a systemic perspective, for society’s long-term stability.
Everyone can see that a society in which a) people must buy the goods they need to survive and b) people are by and large dependent upon paid labour to earn the money to buy them will enter into a fatal crisis if c) it allows technology to replace labour without any system-wide planning to find new ways of ensuring that people can live and that social services can be funded. The history of capitalism is largely a history of ignoring the social costs of technological development and letting the workers made redundant fend for themselves and gradually die out. That would seem to be the approach on offer at this point, but there is a difference, or a potential difference, that means it will most likely not work. Past rounds of technological development did create new and increased demand for labour. The emergence of Artificial Intelligence threatens to break with this pattern, reducing the overall demand for labour, or at least for full-time workers with secure jobs.
(Some economists dispute this view and argue that technology is just an ideological excuse to draw attention away from anti-labour political choices. No doubt there is some truth to this argument, but it seems safe to conclude that even if technology will not eliminate all jobs anytime soon, it is contributing to their continued degradation. For a clear articulation of this argument, see the Economic Policy Institute report The Zombie Robot Argument Lurches On.)
Let us assume for the sake of argument that there will at some point in the future arise a structural crisis due to severe declines in demand for labour. This possibility helps explain the recent discussion of Guaranteed Basic Income projects in some parts of the capitalist world. In the form on offer in Ontario, for example, it will be little more than the existing welfare system by another name: it will provide poverty levels of income support and keep people tied to commodity markets (rather than free public services) to satisfy their needs.
If business consultants like Martin Ford (author of two studies of the future of work that are worth reading, The Lights in the Tunnel and Rise of the Robots) are correct, the structural crisis of capitalism noted above is inevitable, as the technical achievements of AI become self-ramifying and abolish the need for human labour in ever more domains formerly judged exclusively human. If Ford and others are right (and again, they may not be, but one must plan for worst-case scenarios), the looming crisis creates an opening on the left for political mobilization around creative policy responses (massively reduced hours of work without loss of real income, a GBI at levels sufficient to free individuals from the need for paid labour) that will be difficult to resist, because mass unemployment always spells massive trouble for the legitimacy of capitalism. But it also poses another challenge, one not often remarked upon on the Left, which has its own indigenous technotopian wing.
To this point in human history, labour has been a natural necessity, a socially imposed necessity, and a source of meaning and value in human life. People have had to work directly on the land to live (as in agricultural societies); they have had to work in order to earn the money they need to exchange for the goods their lives require (as in capitalism); and people’s labour has made them feel like valuable contributors to the lives of the other people with whom they share the world. If we are moving to a technological stage of history in which the natural necessity of human labour is abolished or seriously attenuated, then its social necessity will be abolished as well (although whether that takes a form that is in the interests of displaced workers or not depends upon the success of future left struggles). But even the resolution of that problem in the interests of workers would not solve the third, and the left needs to think philosophically about its response to the potentially catastrophic loss of meaning in a world without work.
Marx foresaw the possibility that capitalist technological development would eventually do away with the need for human labour. In the Grundrisse he welcomed it as a necessary step in the final liberation of human beings from naturally and socially coercive material circumstances. In Capital he attributed the falling rate of profit to the increase in the “organic” (i.e., technological) composition of capital. Capitalism was doomed over the long term to collapse, he thought, because it requires an increasing rate of profit that its own competitive trajectory makes impossible.
But in his early works, where he thought of labour not only as the means of producing life but, in so far as it was non-alienated, also as a means of producing meaning in life, his emancipatory vision turned not on freeing human beings from labour, but on freeing labour from the meaningless forms it takes under capitalism. Thus, people would free themselves to labour in ways that were valuable for others and meaningful forms of self-creative activity for themselves. Later thinkers like William Morris continued this tradition of looking to creative, highly skilled labour as the deepest normative foundation of the struggle for socialism.
There are few William Morrises left on the left. The dominant voices tend to look to a post-work future rather than a non-alienated work future. A recent example of this vision is Nick Srnicek and Alex Williams’s Inventing the Future. While it would be self-contradictory for a position like mine to deny the value of technological development (what better example is there of human intelligence and creativity than the history of science?), we must also resist the intellectual pathology of projective abstraction discussed above. That is, we must remember that science and technology are not really independent historical forces and can always in principle be subjected to critical and evaluative criteria that derive from considerations of: a) what our real needs are at a given moment in history, b) whether, in light of those needs, we need to replace a given form of labour with automated systems, and c) what the costs will be if a given form of labour is replaced with an automated system, because d) that form of labour is life-valuable in its non-alienated form.
Do we really want to be treated by robot doctors and nurses? Do we really want to “learn” from online modules and not actual human teachers? Shall we listen to nothing but music “composed” by computer programs and read “news” compiled by algorithms? Is it sensible to replace pilots with ground-based systems, given the awe that controlled flight inspires in people who want to become pilots? Do we want all of our food grown by automated greenhouses without any connection between human hand, soil, and produce? Will a world without booksellers and record shops and the conversations between devotees they enable really be richer?
The questions can be answered either way, I think, in the case of any particular form of labour. What cannot be answered either way, I also think, is the question of whether life can remain meaningful when there is nothing essentially required of us. By “essentially required of us” I mean a demand on our time, exerted by the recognized needs of others, that causes us to work, not in the first instance for money, but because we acknowledge a good in the satisfaction of the others’ need that our labour fulfills. Meaning derives from recognizing ourselves as people who can respond to the demands that others’ needs exert upon us. This form of recognition draws us out of the self-satisfaction of an ego-centric cocoon and allows us to devote some of our lifetime and life-activity to something outside of ourselves. If that sort of devotion to the not-self is not the ethical foundation of socialism, then I do not know what is.
Through non-alienated work we make ourselves real for others and contribute to the present and future of the human project. That is not the whole of what makes life valuable. We need to play as well as work; we need time for ourselves as well as for others; we need to be idle as well as active, as both Bertrand Russell (In Praise of Idleness) and Paul Lafargue (The Right to Be Lazy) remind us. But life has to be more than game playing and amusement. Both get boring for a reason: they make no existentially compelling demands upon us. No one commits suicide because their team loses the Stanley Cup; people do commit suicide when they feel they have failed others whom they regard as rightfully depending upon them in a given instance.
What does that tell us? It tells us that people distinguish between the things they have to do in life, which make it unbearable if they fail, and the things that are optional. We might think that life would be better without the first, but it would not, because it would be a life not just without work, but without necessary connection or devotion or obligation to anything. It does not follow that we should not exploit technological power to free our time from forms of work so degrading, servile, and mundane that they choke rather than give voice to our creative abilities. It does follow that we must govern our own technological powers rather than allow them to blindly lead us into the oblivion of a society in which we have no more real need for each other.