AI Thought Leaders: The Singularity

Close observers of the AI development community may have noticed a particular idea discussed within the field that goes beyond workflow automation and data analytics into something much more esoteric: the Singularity.

The precise definition can be hard to pin down, as it can mean different things to different people. But in general, the Singularity refers to the point in time when an AI system gets so advanced that it becomes capable of improving itself without any human guidance, which leads to even more advanced AI that itself improves further, which then yields AI even more advanced than that, on and on, faster and faster, each iteration in this cycle producing greater results than the last. Given enough time, according to this theory, AI could develop into an inscrutably alien "superintelligence" that is as far above humans as we are above mice, cockroaches or even bacteria. This superintelligence would then usher in an entirely new era of existence, one where humans are no longer the dominant intelligence on this planet. 

Depending on who you ask, what happens next could be a technological utopia free of material scarcity, where disease and suffering and possibly even death itself are mere relics of a less enlightened past; a dystopian nightmare where humanity, if it is allowed to exist at all, loses all agency and lives in the shadow of an all-powerful digital god that is indifferent to us at best, actively hostile at worst; or something in between, such as a complete blending of human and machine intelligence to the point where the distinction between the organic and synthetic becomes meaningless and we, ourselves, collectively shed everything that once made us human. What outcome we get, according to the theory, depends on how well we humans are able to align early AI with the right goals, preferences and ethical principles. Which goals, preferences and ethical principles are, of course, the source of much debate within the community. 

Overall, those who believe this is possible, let alone desirable, are in the minority, even within the tech sector, where the idea finds the most support. Despite this, it remains a topic of conversation among prominent players in the AI space such as futurist and Google AI researcher Ray Kurzweil, OpenAI CEO Sam Altman and Elon Musk, among others. But what do our own AI experts think?

In this, the third and final part of our series, our experts ponder the question:

"Do you believe AI can eventually lead us to a technological singularity that produces a superintelligence that brings civilization into an unprecedented new era that fundamentally alters what it means to be human? And if so, is such a state something that should be actively pursued by society?" 

You can read the first part here and the second part here.  

Davis Bell

CEO, Canopy
Wow! Getting very 3 a.m. dorm-room-conversation here. I believe AI is going to get much, much smarter and more powerful than it currently is. I believe that the applications that leverage this smarter, more powerful AI will dramatically increase the effectiveness of the people using them. But there's still a big gap between that, on the one hand, and on the other hand AGI or the singularity or whatever you want to call it. It seems the progress from model to model is already slowing. And there are constraints we may run into in terms of energy and training data. So who knows where we cap out and whether we get to the singularity. But if I had to guess, I'd say no. As far as whether we should pursue this, I'd say it doesn't matter — we're going to.

Jim Bourke

Managing Director, Withum Advisory Services
Wow, that is a great question and makes you wonder about the true potential power of AI.

I do believe that a path to technological singularity is a possibility, but given the current limitations associated with chip development and production challenges, don't look for that to happen anytime soon. I do believe we will start to see this shift seven to 10 years out, and with it, we will then be challenged with how to "control" such advancement.

I also believe that we should pursue this path, but it will be extremely slippery. I am concerned about the ethical challenges and possibly the unintended consequences of these major advances. Having said that, I do believe that the potential benefits of advancements in this space will far exceed the potential challenges that we will face.

Samantha Bowling

Managing partner, GWCPA
While the concept of a technological singularity is fascinating, I believe it remains speculative at this point. Rather than focusing on creating superintelligence, society should prioritize developing AI that enhances human potential, aligns with ethical values and addresses pressing global challenges. As we continue to adapt to doing more with fewer people, AI offers an opportunity to reimagine workflows, improve efficiency and empower teams to focus on higher-value, strategic tasks. By embracing AI as a partner rather than a replacement, organizations can drive innovation while maintaining human oversight. The key lies in adopting AI responsibly, ensuring transparency, mitigating biases and upholding data security. In accounting, for example, AI can streamline compliance, identify risks and uncover insights that were previously out of reach. Striking the right balance between innovation and responsibility will not only ensure AI's long-term success but also its role in advancing humanity's collective potential.

Ted Callahan

Director of partnerships and strategy, QuickBooks Partners Segment, Intuit
Ted Callahan declined to answer this question.

Daren Campbell

Tax technology and transformation leader, EY Americas
With AI, there will be a convergence, but whether we reach a technological singularity is hard to say. The advancements of AI and technology will bring us closer and closer to the idea of a technological singularity, which would fundamentally alter what it means to be human.
It's similar to the idea of an asymptote; if you look at a big picture it seems like you're getting really close to this point, but if you zoom in at any of those points, there's still a gap.

Jack Castonguay

Vice president of strategic content development, Surgent
Look how far AI has come in just the past few years. I think it's only a matter of time until we reach singularity. It's a question of when, not if. It is also one of those things that we need to stop and ask ourselves, just because we can do it, should we? There was a study that found that 10% of people working in AI believe it has the potential to destroy the world. Ten percent is a terrifyingly high number to me. It's why AI draws so many parallels to the making of the atomic bomb. We are likely to see incredible societal breakthroughs because of AI — one day it will likely help us cure cancer and model proteins leading to even more cures. But what will the cost be? Will it start and help dominate wars to maximize profits for a nefarious company or resource-scarce country? With singularity, if it is capable of greatness, it is also capable of mass devastation, just like humans. It will be the ultimate tradeoff. It could solve our most complex problems, or it could end us. If we pursue it, we better get it right.

Danielle Supkis Cheek

Head of analytics and AI, Caseware
I think the concept of technological singularity makes for great movies. Regulatory frameworks, market competition and the fundamental diversity of AI technologies make a true technological singularity highly improbable, especially within our lifetime.

Ellen Choi

Founder and CEO, Edgefield Group
Yes, I believe AI could eventually lead to this technological singularity. This outcome seems inevitable given humanity's relentless drive for innovation and progress, especially within the framework of capitalism, where competition and the pursuit of outsized gains fuel rapid advancements.

The race toward superintelligence is unstoppable; whether through regulated channels or in the shadows, progress will continue as individuals, organizations and nations vie for dominance. Even with regulations, loopholes will be exploited, and innovation will persist, often prioritizing immediate rewards over long-term considerations.

The challenge, then, is managing this trajectory responsibly. The singularity represents both unparalleled opportunity and unprecedented risk. To navigate this, society must find ways to align incentives, prioritize collective action, and establish guardrails that balance progress with safety.

Sergio de la Fe

Enterprise digital leader, partner, RSM US LLP
While the idea of AI leading to a technological singularity and creating superintelligence is fascinating, I believe that in the context of the accounting industry, the focus should be on how AI can enhance human expertise, improve efficiencies and drive innovation—not on a potential future where AI surpasses human intelligence. The idea of AI fundamentally altering what it means to be human is more philosophical and speculative, especially in industries like accounting where human judgment, ethics and oversight remain crucial. Yet, superintelligence is inevitable. Over half of the world's 50 most influential people are invested in advancing AI, and many of the brightest minds are tackling its biggest challenges. Unless there's a major shift in investment or priorities, progress will continue. The most likely outcome is the development of artificial general intelligence, and those with the infrastructure to leverage it will reap the rewards first. Whether we reach AGI or not, its potential benefits are enormous, and we must be prepared for ongoing technological progress.

Avani Desai

CEO, Schellman
No, no, no. People said the same thing when the internet came along—that it would fundamentally change humanity—and look where we are. I still have to take my trash out! But I do think humanoid robots will eventually help with tasks like that, and I'm all for it. Still, the idea of a singularity seems a bit far-fetched. I think we should focus on using AI to enhance, not replace, what it means to be human. Let's solve real problems instead of chasing sci-fi scenarios.

Pascal Finette

Founder and CEO, Be Radical
As long as humans feel the need to hate and kill each other and keep destroying the only planet we can call home, I honestly don't care about the longstanding debate about the Kurzweilian singularity. Let's focus our energy and effort on fixing what is in front of us — and figure out how technology (regardless of whether it's superintelligent or not) can help us with this.

Prashant Ganti

Vice president of global product strategy, development and alliances - Enterprise Finance Suite, Zoho
First, let's clarify what intelligence entails. As I see it, intelligence involves understanding the world around us, which encompasses both the physical and sensory aspects. It includes the ability to remember information and recall it when needed, as well as the capacity to reason and plan. I'm not convinced that AI can perform any of these tasks particularly well now, nor do I believe it will achieve advanced levels in all these areas in the future. 

Let's consider a thought experiment: How would a language model from before the invention of the airplane respond to questions about flight? Such a model might have concluded that human flight was impossible based on: 

a. Historical attempts at building flying machines. 

b. The weight of the heaviest bird capable of flight. 

If the Wright brothers had relied on such an LLM's prediction, we might not have airplanes today. 

However, this doesn't diminish the utility of AI or its potential impact on humanity. AI is undoubtedly useful and will affect us all. It's poised to replace certain jobs or at least automate specific tasks within jobs. AI agents will influence particular roles, potentially altering the premium placed on certain skills, leading to some traditional jobs disappearing entirely. For instance, back in the 1600s, there were job advertisements for "computers"—actual human beings who computed numbers.

Mike Gerhard

Chief data and AI officer, BDO USA
I prefer not to speculate on the concept of a technological singularity leading to a superintelligence that fundamentally alters what it means to be human. However, it's crucial to keep people at the center of our pursuit of AI innovation, focusing on developing solutions that enhance human capabilities rather than overshadow them. I firmly believe that AI developments must align with core human values such as accountability and transparency. Prioritizing ethical considerations and establishing robust frameworks to guide the responsible advancement of AI technologies is essential.

Chris Griffin

Managing partner of transformation & technology, Deloitte & Touche LLP
People have been developing technology throughout the course of human history—from irrigation systems used in early farming methods, to the creation of the mechanical clock. When you think about AI in the context of human history and development, the creation of increasingly advanced AI is going to happen. There is no denying it. While the potential benefits are immense, the risks and ethical challenges are equally significant. Robust ethical frameworks will be critical to ensure that AI development is aligned with human values and safety.

Aaron Harris

Chief technology officer, Sage
I'm not sure the future will resemble the "singularity" where humans and tech merge into one single superintelligence, but AI will certainly dramatically change civilization as we know it. I'm more concerned about whether the benefits accrue to all or whether they create a bigger rift between the haves and have-nots. It's up to us to build a future where AI benefits everyone.

Wesley Hartman

Founder and CEO, Automata Practice Development
Humanity has been great at taking science fiction and turning it into science fact. It remains to be seen if we end up in Star Trek or The Matrix. It was just over 100 years ago when the first airplane took off with the Wright brothers and now millions of people fly all over the place. The internet has expanded the connections and speed we can communicate. History is full of these seismic changes. These changes have had positive and negative impacts on humanity. I believe it could happen, but I think the meaning of being human has been constantly changing. Even after we reach that point, the question we would have is: "What now?" I think humanity will find new meaning. And like all pursuits of technology, it is important to remember to keep the humanity in positive [perspective].

Joel Hughes

CEO, Rightworks
We believe the evolution of AGI (artificial general intelligence) technology is inevitable. This profound transformation has vast and complex implications since AI holds the potential to significantly reshape the way we work and produce, pushing us beyond the established mindset and patterns that came from the Industrial Revolution. Ultimately, the journey toward superintelligence will make us reconsider fundamental aspects of our existence and society, such as what it really means to be human and where we derive or assign value. 

Our current societal structures and ethical frameworks might not be prepared to handle these philosophical and practical challenges. To move things in the right direction, we need to actively pursue this goal, but we also need to prepare for what comes next. This path offers unprecedented opportunities and significant challenges which demand careful consideration and global cooperation. Our greatest hope is that when AGI becomes a reality, it will be controlled by leaders of character and integrity, ensuring its responsible and ethical advancement.

Kacee Johnson

VP of strategy and innovation, CPA.com (an AICPA company)
The idea of a technological singularity, where AI achieves superintelligence and ushers in a new era of civilization, is both fascinating and deeply complex. While it's theoretically possible, the timeline and pathway remain highly uncertain, and opinions on its desirability vary widely. We had a keynote speaker at an Executive Roundtable dive into this, and while I see how it could solve existential challenges, it also raises profound risks and ethical dilemmas.

As to whether it should be actively pursued, the answer depends on how society manages the development of such a transformative technology. Pursuit without caution or oversight could lead to unintended consequences. It's not just about whether we can achieve it, but whether we should and how we ensure it aligns with humanity's greater good.

Jenn Kosar

US AI assurance leader, PwC
No, AI is built on human input and generates outputs based on the past. It cannot predict the future. There's no question that AI is having, and will continue to have, a significant impact on the business landscape and society in general. However, we believe it's more valuable to approach this technology as a fundamental new tool for enabling human ingenuity rather than speculate or over-sensationalize about what AI "might" do.

Thomas Mackenzie

US and global chief technology officer, KPMG
I am not sure, but I know we will still need fully human auditors exercising professional skepticism! 

Blake Oliver

CEO, Earmark
Yes, I believe that AI could eventually lead us to a technological singularity. Human-computer interfaces are something that companies like Neuralink are actively working on, and I wouldn't be surprised if someday humans and computers physically merge in a way that changes what it means to be human. This would be a natural step along the path of medical technology development we've been following for decades. Many people are walking around today with artificial organs. They are cyborgs. I have personal experience with this. My son has cochlear implants — he was born completely deaf, and his implants allow him to hear. That technology works via a computer-to-auditory nerve interface. It's not difficult for me to imagine a two-way computer-brain interface being available in the next decade or two. 

Adam Orentlicher

Chief technology officer, Wolters Kluwer
While the possibility of technological singularity raises interesting questions, I believe we should focus on both immediate applications and thoughtful long-term development of AI. In tax and accounting, we're seeing AI excel in optimizing firm operations, elevating professionals, and enhancing client service.

Current AI is remarkable at specific tasks like anomaly detection and process automation, but it lacks nuanced judgment needed for complex decisions. Its true value lies in eliminating mundane work while enabling professionals to focus on strategic advisory services.

That said, we shouldn't dismiss the potential for transformative AI advances. As a technology leader, I see our role as steering development toward augmenting human intelligence while carefully considering the ethical implications. This means building systems that enhance professional judgment, improve client outcomes and maintain rigorous standards — all while staying mindful of where this technology might lead us.

Rather than pursuing superintelligence as an end goal, my opinion is we should focus on responsible innovation that preserves the essential human elements of professional services while embracing AI's potential to reshape our capabilities. The future isn't about replacement, but about thoughtful enhancement of human expertise.

Abigail Parker-Zhang

Accounting professor, University of Texas at San Antonio
I believe technological singularity will arrive at some point in the future. However, I don't think it is a state that should be actively pursued by society.

Hitendra Patil

Founder and CEO, Accountaneur
Thinking that AI might achieve singularity with superintelligence feels like a recipe we have never heard of, with ingredients we've never seen, and cooking in an oven that works on nuclear power. We are not yet ready to taste what might come out of this alien kitchen, yet here is what I think:

If AI reaches a threshold of singularity, where it can improve itself every millisecond, then we are looking at a future wherein AI could eerily outsmart us or replace a lot of what we currently do in accounting. Will there be a decent ROI for such AI systems if people aren't willing to pay much higher fees for accounting?

Should we pursue it? It should be analyzed just like a high-stakes investment. The upside of such a solution to humanity's most challenging problems could yield a better quality of life. However, unprecedented ethical liabilities and job losses might cause greater inequality and the destruction of economic fundamentals. The infinite human potential cannot be allowed to regress to the pre-medieval era of "food-shelter-survival" aims of life. Perhaps the time has come to build a modern-age Noah's Ark — for what happens when artificial intelligence goes berserk. 

Enzo Santilli

Principal, Grant Thornton Advisors LLC
There will be advances where society will increasingly accept larger bodies of integrated work to be done by AI, marking the rise of Agentic AI, which is only in its infancy. Will it fundamentally alter what it means to be human? I don't think so. Since the Stone Age, mankind has developed tools to do what humans could not do with their own hands. From the Industrial Revolution, where tools morphed into machines, to the Information Age, where knowledge trumped physical assets. Now, AI enables end-to-end process work without human hands touching it, even if we still prefer having a human in the loop. With humans being humans, there will always be the need to have personal interactions — to travel, to touch and to see — and civilization won't allow AI to be a cheap substitute. The pent-up demand for experiential engagement that occurred after coronavirus lockdowns is a clear sign of this enduring need.

Doug Schrock

Managing principal of artificial intelligence, Crowe LLP
The first part, yes (singularity and unprecedented new era). But it won't fundamentally alter what it means to be human. In the long arc of history, this will be seen as a pivotal time and we will ultimately emerge stronger as a society. However, the arc of history does not worry itself with the impact on the particular generation experiencing the disruption. It is likely to be a tumultuous period as we advance to this next stage of progress.

Eitan Sharon

SVP of data and science, Xero
That's an intriguing question and something I think about in my spare time. While it's interesting to think about futures like that, it's also really important to not lose sight of the fact that there are benefits that we can and must realize today for our customers.

As we push the boundaries of AI, it has the potential to do infinitely more for accountants and small businesses. My focus is on putting the best of available AI technologies into the hands of Xero's customers so they can flourish. 

Donny Shimamoto

Founder and managing director, IntrapriseTechKnowledgies
I don't believe the singularity is possible, mainly because AI only works based on what it is taught, and while it may hallucinate, that's not the same as actual creativity. I do worry that overreliance upon technology (that would be run by AI) will turn humans into lazy pampered slugs—and that would be a reason not to pursue that level of automation in our lives.

Sean Stein Smith

Accounting professor, Lehman College
I think the singularity is definitely a possibility, although not as quickly as some advocates would have you believe. Any progress or development that can ensure a transparent and objective future for AI-human integration should be pursued, especially since AI leadership given such an event will be of incredibly high importance. 

Vsu Subramanian

SVP of content engineering and head of AI, Avalara
I don't think we're close to that level of intelligence. That still seems more like science fiction to me at present. I believe AI will continue to advance and will have an effect on civilization, just like mobile computing, the internet and earlier advancements we have lived through. AI's impacts will likely be seen at a faster pace than prior advancements. I also believe humans have a capacity to adapt and change. There will be many new advantages to be gained and there will be risks, downsides and misuse with AI, just like all the previous technology advancements. I don't think we can unilaterally stop such advancements in technology when this development is happening worldwide.

Eyal Shinar

Co-founder and CEO, Black Ore
No, I don't believe current AI technology will lead to a technological singularity or superintelligence that fundamentally alters what it means to be human. Current AI systems show extraordinary pattern recognition, but not yet true reasoning or consciousness — they're amplifiers of human knowledge, trained on and bounded by human-generated data.

This 'limitation' is actually what makes AI transformative. Rather than creating a digital superintelligence, AI is democratizing expertise — letting anyone leverage PhD-level analysis in their daily work. When millions can access expert-level insights, human innovation accelerates dramatically.

I might be a contrarian, but this time, it's not different. Like the internet in the '90s or smartphones in the 2000s, AI isn't creating a post-human future — it's amplifying human potential. The real transformation isn't about creating digital gods, but about elevating the collective capabilities of humanity itself.

Prasad Sristi

Chief AI officer, Ascend
I don't think we will ever fully finish defining what it means to be a human. We constantly seek to find meaning in our work and the nature of that work has always been changing. If you zoom out a bit, you'll see we went from hunting and gathering to agriculture to industrialization to the knowledge economy. Those transitions have never made human life less meaningful, even though at the time, we probably put our entire meaning into that activity. I have no idea if or when we will achieve artificial general intelligence or superintelligence. The models continue to get smarter and folks who are at the center of AI innovation are saying that they haven't seen a cliff yet. I see continued benefits to the accounting profession because of more powerful models.

Ben Wen

CEO and co-founder, Tallyfor
Yes to both. As a species, humanity is compelled to self-improvement for the betterment of the next generation. In short, because we love our children, we strive. A new physics theory called Assembly Theory shifts the perspective from individual to lineages. This theory is described by Prof. Sara Imari Walker in Life as No One Knows It. That perspective considers each of us as part of a lineage of both our genes and the work we do. Like a black belt who recites the long lineage of teachers that taught her teacher, our intellectual lineage is an assembly of our learnings. That lineage is rapidly becoming woven with non-human assisted intellect, a continuum of the singularity if you will. A coarse version of the singularity exists already. I personally can't remember appointments or navigate anywhere without my iPhone. My watch tells me to stand and to breathe. I comply. Perhaps I am less capable of reading a paper map than the pre-iPhone me. Fine, so be it. That time and energy formerly used to read paper maps (and fold them back correctly!) is used for better purposes and for watching YouTube.

David Wood

Accounting professor, Brigham Young University
When I talk about AI with various groups, it's amazing how quickly the conversation goes from the technical to the moral and philosophical, like this question. I think that is a good thing! No, I don't think AI will fundamentally change what it means to be human. It will change what we do and how we do it, but not fundamentally who we are, which is children of God with divine identity and purpose. To me, AI is an accelerator that will help good people to be better, and bad people to be worse. All of us will have to decide how we will use it and hopefully the majority use it for good.

For example, I believe AI has the power to dramatically improve education. It does not change the need humans have to learn and to develop but can change how we learn and what we need to learn. Still, the fundamental need for improvement in both intellectual and moral ways is critical for us as a species. AI does not change this fundamental need, but can alter how we get there and give us new possibilities.

Joe Woodard

Founder and CEO, Woodard
I believe AI will imminently outpace humanity in areas like mathematics, computer programming, medical research, engineering, economics and data analytics, including predictive analytics. I also believe AI will soon exercise these abilities with some level of autonomy, regardless of how diligently we may work within the spheres of regulation and programming to prevent this autonomy. In other words, it will eventually break its bonds. 

I believe AI will alter the human experience individually and societally. However, I do not believe AI will fundamentally alter what it means to be human. Instead, I believe AI will eventually liberate us to be more genuinely human. Unfortunately, I believe, as did Gene Roddenberry, this liberation will only take place after humanity has endured a global and disruptive adjustment period economically, and perhaps militarily as well. In other words, AI will usher in a new "dark age" from which humanity will eventually transition into a new, technology-driven "enlightenment."

Carmel Wynkoop

Partner-in-charge of AI, analytics & automation, Armanino
I'm skeptical. People said the same thing about the internet revolutionizing humanity—and sure, it's changed a lot, but we're still stuck with the same human problems, like raising good kids! The singularity feels more like a philosophical distraction than a practical goal. Instead, let's aim for AI that works for us—helping solve real challenges, from streamlining work to tackling global issues. If AI can make life easier or more meaningful without erasing our humanity, that's the kind of progress I'd love to see.