Computer science can – and perhaps should – be interpreted as a continuation of philosophy by other means. Why? Because computer science, and its evolution into artificial intelligence, raises existential questions: not just about technology but about human nature itself. And the path that leads computers toward artificial intelligence begins with mathematics. In a sense, mathematics is the language that describes the physical world, a language devoid of human imperfections. Philosophy thus ends up identifying itself with mathematics, with formal logic. Bertrand Russell and Gottlob Frege, at the beginning of the twentieth century, were champions of this attempt. The story that leads to imagining and building computers is born of this line of thought: the philosopher becomes a mathematician, and the mathematician becomes a computer scientist. Turing, to whom we owe the first idea – the logical description of the computer – and von Neumann, to whom we owe the design of the computer as a ‘working machine’, are direct heirs of Descartes, Leibniz, Russell, Frege, and Hilbert.
This is why computer science can be considered a continuation of philosophy by other means. Along this path, man, aware of his limits, chooses to favor mathematical, logical-formal thought over any other possible style of thought. Its main feature is perhaps this: so as not to get lost in complexity, we take into account only those notions that reach absolute clarity. Mathematics is important, but humans understand the world beyond calculation. They understand it by accepting chaos, darkness, and the unknown, for logical calculation has its limits. Thinking has therefore extended beyond calculation to include narrative. Arguably, though, the calculations came first.
Written language was first developed in ancient Sumer as a tool to facilitate accounting. The earliest known written texts of mankind concern accounting or record keeping. Fiction, mythology, and other narratives – including the historical text – arrived far later. Mankind thus went beyond the ‘calculating’ approach to thinking when it realized that not everything could be calculated and predicted with arithmetical precision. Uncertainty and failed expectations demanded a new systematic approach: narrative thinking. What cannot be calculated must be narrated.
Therein lies the limit of information technology: it is the ultimate expression of a calculating approach to thought, one that leaves few openings for interpretation. But the next generation of computers, those capable of artificial intelligence, will take thinking (and therefore philosophy) beyond mankind, to where human calculation becomes impossible. Each of us, in our relationship with the computer, has experienced this: the computer imposes a logic, a hierarchy, a structured form on our way of thinking. The computer has also served as the machine, or tool, through which we acquire degrees of freedom: we can access sources, connect them, write following our thoughts, move blocks of text, or rework what has already been written, with a freedom that pen and paper never granted us. Yet the computer imposes constraints and processes thought through formal logic; human thought is forced, through the computer, to conform to that logic. Artificial intelligence, therefore, will not so much force humans to conform to the computer’s logic in order to communicate with the machine; it will replace human thinking altogether.
From ‘machines that accompany mankind in thinking’ to ‘machines that replace mankind in thinking’. When we look out over that immense chaos that is the Web, searching through a search engine for answers to our questions, we are exploring a territory that was inaccessible to the man who thought without the support of machines. The computer, though first conceived as a machine to replace man, was in the fervent cultural climate of the 1960s reimagined not to replace human thinking but to accompany it. In this approach, humans are ‘free’ to think, and computers merely accentuate or increase that freedom, allowing for flights away from pure logic and into a deeper questioning of the human experience.
And finally, rather than removing all responsibility for thought from humans, as was envisaged (and perhaps hoped), information technology did the opposite. Through its various iterations – from the Personal Computer in all its guises, starting with the desktop and culminating in the smartphone – it has stressed that humans bear full responsibility for their thinking. Despite the justified complaints of social critics who warn that smartphones have dangerous social effects due to their pervasive influence, these devices still rely on our input and our responsibility.
AI: From Thought Suggestion to Thought Imposition?
Artificial intelligence – as the evolution of information technology – will change that fundamental aspect of computing: it will gradually remove responsibility. The risk that artificial intelligence machines will eventually replace man in the capacity to think cannot be ruled out. Ongoing research can lead to machines capable of self-development, self-regulation, and self-reproduction. It is premature to say when this will happen, but it seems possible in the near rather than the distant future. Doubtless, from the 1950s onwards, there has been, along one technological path or another, a quest for intelligence capable of functioning independently of man. It is not a thrilling prospect, as it implies that humans themselves will stop thinking. The boundary is more ethical or political than technological: technology will eventually be able to achieve what are currently mere fruits of our imagination.
The question is whether we, as humans, will allow machines to extend beyond supporting their human users to replacing them. Engineers, for instance, are still ‘in charge’ when they receive the elements with which to improve a design rather than the finished design itself. And then there are wider ethical and political implications: artificial intelligence is still the product of the ideas of one or more individuals, but it is not the product of the individual user’s thinking, and it therefore imposes rather than merely suggests.
The presence in the world, perhaps in the not-too-distant future, of ‘intelligent machines’ capable of interacting as equals with man is no longer a mere possibility or probability. Rather, it is a problem that we, as humans, have to accept and confront. Legislating the rights and duties of ‘electronic entities’ seems like a bureaucrat’s fantasy and, therefore, a normal person’s nightmare. Moreover, considering what roles these machines could perform – and, more significantly, what roles they might come to perform – suggests that mere legal boundaries or precautions are ineffective and vague at best. Rather than rely on regulation, humans themselves should think more philosophically, reflecting as never before on Socrates’s simple yet infinite exhortation: know thyself. The inevitable intellectual (and perhaps even emotional) advancement of artificial intelligence should encourage humans to ponder, as at no other time before, who we human beings are and what our place in the world is. In that sense, artificial intelligence can turn from something to fear into something that helps us better understand our nature and our potential, and thereby helps us resolve social, political, and economic problems with more confidence and effectiveness.