AI Social Impact #1

Is The Singularity Imminent?

Welcome to the inaugural AI Social Impact Newsletter, brought to you by Pez.AI. The purpose of this newsletter is to raise awareness of the near-term challenges facing the age of AI. We are less concerned with sensational worries about the “singularity” and more concerned with tangible threats to society. These include a widening income gap between the minority who reap the rewards and the majority who become economically displaced, as well as the ways that bias in machine learning algorithms and AI will impact society. AI has the potential to automate the majority of jobs faster than new ones are created, and it will take on ever more important roles in our daily lives. Who will own and control the AI that we depend on? How do we remove biases from models so that automated decision-making is objective and fair? Who is accountable for the decisions that AI makes?

Progress and adoption are happening faster than people imagine. Whether it’s accounting, finance, law, logistics, retail, or even data science, all industries are affected. As with all scientific revolutions, we need to discuss these issues now to prepare for this new social and economic reality. This newsletter is one step down that path.

Warm Regards,
Brian Lee Yung Rowe
Chief Pez Head, Pez.AI


To get started, exploring these issues means dissecting the so-called singularity. What exactly is it, and do we really need to be concerned? Like AI, the singularity means different things to different people. Our working definition is this: the singularity occurs when a machine (AI) exceeds human intelligence. A good overview is given by Karen Stollznow, who attended the premier singularity conference, the Singularity Summit, and describes both the history and influences of the singularity idea.

Those who think the singularity is real and near are the “Singularists”. Singularists are an eclectic bunch whose ranks are filled with STEM luminaries past and present, including Bill Joy, Bill Gates, Elon Musk, and Stephen Hawking. The movement’s prime cheerleader is most likely Ray Kurzweil: lifelong futurist, inventor of numerous OCR and speech synthesis tools, and more recently a Director of Engineering at Google. A prime argument for believing the singularity is imminent is the continual technological progress that transcends Moore’s law. Riding this theme of dizzying progress, SoftBank CEO Son Masayoshi has recently jumped on the Singularity bandwagon, predicting that a microchip will have an IQ of 10,000 by 2047, not to mention shoes smarter than we are.

Kurzweil made a splash back in 2006 by saying the singularity would arrive in 2045. More recently, he’s moved that up to 2029! The complete timeline of his predictions for technological advances is here.

While this camp agrees that the singularity is inevitable, its members disagree on whether that is inherently good or bad. Some think it will be great for humanity; others see it as an existential threat. This divide will be the subject of a future newsletter, where we’ll also see how it might not be as wide as imagined.



In an interesting twist, the other co-founder of Microsoft sits squarely in the opposing camp. This group doesn’t necessarily think the singularity is impossible, just far enough away that there are more immediate things to worry about. I’d call this group the “Realists”, except that term is loaded with bias. Instead, rather tongue-in-cheek, I’ll call them the “Zenoists”, after Zeno’s paradox: the singularity forever approaching but never arriving. Paul Allen wrote a still-relevant article from 2014 that picks apart most of the enthusiastic claims of the Singularists. Allen’s argument targets the Singularists’ method of extrapolation: progress in understanding cognition does not follow Moore’s law (and, by extension, computing). We still know very little about how the brain works, let alone consciousness. From this perspective, even twenty years seems a short time to decipher what some call the most complex thing in the universe.

Another well-known Zenoist is Linus Torvalds, the famed creator of Linux. He expects progress in narrow applications of AI, as we’re seeing today with deep learning, but is less convinced about human-level cognition. In his words, he doesn’t expect to “see the situation where you suddenly have some existential crisis because your dishwasher is starting to discuss Sartre with you”. That’s about as plausible as Douglas Adams’ existential elevators, which were “imbued with intelligence and precognition [and] became terribly frustrated with the mindless business of going up and down, up and down…”

Erik Larson offers a slightly different take: we humans are willingly making ourselves obsolete through technology. Focusing on whether AI becomes superintelligent is a distraction from the bigger issue of what we are doing to ourselves. This argument has a long history. I first heard it in high school, when my calculus instructor wondered whether scientific calculators would make us dumber. Anecdotally, I’d like to think not.


So where do you stand? Are you a Singularist or Zenoist? Should we be afraid of AI or ourselves? Shout out over email or social media with the tag #AISocialImpact.


The AI Social Impact Newsletter is brought to you by Pez.AI, a socially responsible AI startup. We make enterprise bots that automate customer-facing workflows and internal business processes. If you like this newsletter, sign up for future issues and please spread awareness by sharing it with your friends and colleagues.

