Future of Humanity with Artificial Intelligence

Federico Faggin, the designer of the first commercial microprocessor, has been turning his gaze ever more toward, and inside, the human being. He has said in various interviews in his native Italy that his new goal, far more than improving computing capacity, is to teach people, ordinary humans that is, that “we are much more than machines.” Faggin’s approach offers some of the best insights into what the future holds for humanity from a technological perspective and, more importantly, his vision preserves a continued and predominant role for all that is ‘human’. Faggin first became aware of the problem of AI in the 1980s, when he began studying awareness and wondering whether a conscious computer could ever be built. Consciousness, in his view, cannot be, as many scientists claim, a property that emerges from matter; it is rather a property of the universe that has existed since the beginning. He went on to establish the Federico & Elvia Faggin Foundation to support scientists working in this direction. His argument feels all the more timely given the recent advances in artificial intelligence, Google’s botched demonstration notwithstanding, popularized by tools such as ChatGPT; indeed, you might be wondering whether this very essay was written with that platform…

Faggin is trying to reverse what appears to be, in mainstream culture anyway, the perception that the future of humanity is not human: a perception that has dug its way deeper into current culture through the works of Yuval Noah Harari. Harari warns with some urgency in Sapiens: A Brief History of Humankind that humanity risks being taken over by technology, just as some of the great works of science fiction have predicted, and this view has gained much credibility among the learned and millionaire classes, that is, the people who effectively have some say in the future. Today there is much speculation about a possible future in which humanity will be overtaken or even destroyed by machines. We hear about self-driving cars, Big Data, the resurgence of artificial intelligence, and even transhumanism, the idea that it will be possible to upload our experience and awareness into a computer and live forever. We also read statements from notables like Stephen Hawking, Bill Gates, and Elon Musk about the dangers of robotics and artificial intelligence. So what is true and what is false in the information we receive? The various projections, says Faggin, take it as a given that technology will evolve to create autonomous and intelligent machines in the not-too-distant future: machines that are our equals, if not our betters. Faggin believes this assumption is fundamentally flawed. True intelligence, he argues, requires consciousness, and consciousness is something our digital machines do not have and never will.

Harari suggests that millennia of human evolution have seen political, economic, religious, and technological upheavals, but one thing has remained the same: the human being. What tomorrow seems to offer us is an augmented human being, more like a god than a man. What Homo sapiens has done to nature (taming, exploiting, destroying, or enhancing it) it could from now on do to itself. As a result of new technologies such as robotics, artificial intelligence, and genetic engineering, the man of the future could be more Deus than Sapiens. Whether you agree more with Faggin or with Harari, the world has entered the dawn of artificial intelligence, the one technology that can destroy the illusions about the goodness and neutrality of technology. Yet most of today’s global problems are scientific, or framed as such, which explains why the average citizen feels overwhelmed by the sheer amount of information and by the difficulty of finding reliable, rational sources to help them make decisions for themselves. Meanwhile, machine learning, the application of artificial intelligence in which a machine (through algorithms, computers, or robots) teaches itself by processing vast quantities of data and information, has already established itself as fact.
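
To make that last point concrete, here is a minimal, purely illustrative sketch of what “teaching itself from data” means in practice. The toy rule (y = 2x + 1), the noise level, and the gradient-descent loop are assumptions chosen for the example, not a description of any system mentioned in this essay; the point is that the program is never told the rule, it only infers it from examples.

```python
# Toy illustration of "learning from data": the program is never given the
# rule y = 2x + 1; it estimates the slope and intercept from noisy examples,
# repeatedly nudging its guesses in whatever direction reduces its error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)                  # 200 example inputs
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 200)  # noisy observations of the hidden rule

w, b = 0.0, 0.0   # initial guesses for slope and intercept
lr = 0.1          # learning rate: how far each correction moves the guesses
for _ in range(1000):
    pred = w * x + b                  # current predictions
    error = pred - y                  # how wrong each prediction is
    w -= lr * (2 * error * x).mean()  # gradient step on the slope
    b -= lr * (2 * error).mean()      # gradient step on the intercept

print(f"learned rule: y ≈ {w:.2f}·x + {b:.2f}")  # comes out close to y = 2x + 1
```

Scaled up by many orders of magnitude in data, parameters, and computing power, this same basic loop of guessing, measuring the error, and correcting is what sits behind the systems discussed below.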

AI programs and tools have already beaten humans at chess and learned to compose a symphony, adapting the style of none other than Ludwig van Beethoven. In the face of such evidence, predictions of machines that learn by themselves and program each other, rendering human beings utterly useless, become rather credible. And chances are that you, dear reader, have already asked Siri (Apple), Alexa (Amazon), or Google Now (Google) to choose music to play, to find the best route around traffic, or to suggest something to read. Certainly, even in a less ‘advanced’ version of such a scenario, the increasing fusion between human intelligence and artificial intelligence, and our lack of control over the speed of this evolution, is inevitable. If all this sounds like fantasy, consider that even Isaac Newton would have struggled to believe that humans would one day orbit the Earth in a man-made machine, let alone dare to reach the moon and land probes on other planets. The airplane itself would have seemed utterly incredible to him.

Regardless of where AI leads humanity, to a vegetable state or, in Faggin’s more optimistic scenario, to a technology confined to the role of a tool, the seemingly inevitable path of technological evolution depends on microchips that will likely try to mimic the human brain. It is no wonder that microchips, or more accurately semiconductors, have become such crucial drivers of geopolitics. Their ubiquity in cars, computers, smartphones, and, crucially, supercomputing and armaments has made them fundamental assets for corporations and governments alike. They remain relatively scarce because the complexity of manufacturing them means that only a few countries can produce them. The fact that most of these countries are in Asia explains to a great extent the intensifying geopolitical confrontation between China and the USA. It also explains why Europeans, who led much of the research behind the foundations of computing, from Alan Turing to Olivetti and Mario Tchou in Italy in the 1950s and 60s, have suddenly realized they have to catch up. And it is difficult to see how Europe could regain that early lead.

In the US, IBM has been leading the quest for AI and quantum computing. Last November it announced the launch of its most powerful quantum computer to date: the Osprey, a 433-qubit machine with more than three times as many qubits as its previous quantum processor, the 127-qubit Eagle. The number of qubits, or quantum bits, measures the power of a quantum computer, much as the power of a PC is measured by the GHz of its processor, that is, by the billions of operations or cycles per second it can carry out, always in sequential mode. IBM’s next goal is to pass 1,000 qubits, as the director of IBM Research, Dario Gil, recently stated. Of course, such computing power will require ever more powerful microprocessors. And Nvidia appears to have a lead with its latest line of AI-ready processors, presented in 2020: the eighth-generation A100 GPUs based on the Ampere architecture. According to Nvidia, these represent an enormous generational leap, performing up to 20 times faster than the previous Volta architecture.
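
A rough back-of-the-envelope calculation helps explain why qubit counts like 433 are treated as milestones: the state of an n-qubit machine is described by 2^n complex amplitudes, so each extra qubit doubles what a classical computer would need just to store that state. The Python sketch below is only an illustration of that arithmetic (the 16 bytes per amplitude assumes a double-precision complex number); it is not IBM’s benchmark or methodology.

```python
# Back-of-the-envelope: classical memory needed to store the full state
# vector of an n-qubit quantum computer. Each of the 2**n basis states
# carries one complex amplitude (16 bytes as a double-precision complex).
def state_vector_bytes(n_qubits: int) -> float:
    return float(2 ** n_qubits) * 16  # 2^n amplitudes, 16 bytes each

for n in (10, 30, 50, 433):
    print(f"{n:>3} qubits -> {state_vector_bytes(n):.2e} bytes")

# Roughly: 10 qubits fit in ~16 KB, 30 qubits need ~17 GB, 50 qubits ~18 PB,
# and 433 qubits (Osprey's count) would need on the order of 10**131 bytes,
# far beyond anything classical hardware could ever hold.
```

The 16-byte constant is an assumption; what matters is the doubling per qubit, which is why qubit count rather than clock speed is the headline figure for quantum hardware, and why such machines cannot simply be emulated by a faster sequential processor.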