Close observers of the AI development community may have noticed a particular idea discussed within the field that goes beyond workflow automation and data analytics into something much more esoteric: the Singularity.
The precise definition can be hard to pin down, as the term means different things to different people. In general, though, the Singularity refers to the point at which an AI system becomes advanced enough to improve itself without human guidance, producing an even more capable AI that improves itself in turn, and so on, each cycle in the loop arriving faster and delivering greater gains than the last. Given enough time, the theory goes, AI could develop into an inscrutably alien "superintelligence" as far above humans as we are above mice, cockroaches or even bacteria. That superintelligence would then usher in an entirely new era of existence, one in which humans are no longer the dominant intelligence on this planet.
Depending on who you ask, what happens next could be a technological utopia free of material scarcity, where disease and suffering and possibly even death itself are mere relics of a less enlightened past; a dystopian nightmare where humanity, if it is allowed to exist at all, loses all agency and lives in the shadow of an all-powerful digital god that is indifferent to us at best and actively hostile at worst; or something in between, such as a blending of human and machine intelligence so complete that the distinction between the organic and the synthetic becomes meaningless and we, collectively, shed everything that once made us human. Which outcome we get, according to the theory, depends on how well we humans are able to align early AI with the right goals, preferences and ethical principles. Which goals, preferences and ethical principles those should be is, of course, the source of much debate within the community.
Overall, those who believe this is possible, let alone desirable, are in the minority, even within the tech sector, where the idea finds its strongest support. Despite this, it remains a topic of conversation among prominent players in the AI space such as futurist and Google AI research head
In this, the third and final part of our series, our experts ponder the question:
"Do you believe AI can eventually lead us to a technological singularity that produces a superintelligence that brings civilization into an unprecedented new era that fundamentally alters what it means to be human? And if so, is such a state something that should be actively pursued by society?"
You can read the