Any general discussion of artificial intelligence (AI) should probably pay a little less attention to Skynet scenarios and a little more to how humans will use the technology.
For those not familiar, Skynet is the artificial superintelligence that serves as the core baddie of the Terminator movie series, bent on wiping out humanity. Maybe at some point we should be concerned about that. But what we should worry about now, in my humble opinion, is creating the best possible AI.
AI, like widespread Internet access, social media, and smartphones, is one of those tech genies that will not go back into the bottle. Banning AI would not work, even if it were possible. AI's effects on society and culture will likely be significant, but no one can truly say what form they will take. If AI is like any of the other tech genies, what actually happens will fall somewhere between the sunniest prediction and the darkest fear.
The thing about AI is that it is not all deepfake videos and sentient phones bilking the elderly; there are genuinely good things that could come of it. The possibility of scientific breakthroughs arriving at a much faster rate is on the table, and remarkable AI-driven results have already happened in research. For my day job at a university, I have written about the remarkable efficiency gains when machine learning is used to review and analyze certain types of experimental data. There are also the potentially interesting results of giving people with creative minds the ability to produce visual art, animation, music, and more without needing technical skill.
There is also a potential dark side to AI. The loss of jobs is a real possibility, and the corporate world often reacts to that kind of inflicted suffering with a cold "cost of doing business" shrug. And how do we keep AI technology out of the hands of bad actors who would use it for things such as terrorism?
Is it good for a few giant tech companies like Meta, Google, and Amazon to be the main disseminators of this technology? They have often been bad at predicting the effects of their own products. I remember when I worked at a first-wave Internet company; the idea at the time was that the Internet would completely restructure the economy. Instead, the economy absorbed the Internet and adjusted. More recently, while I worked in higher education technology (a much more benevolent world than enterprise technology), I heard conference keynotes make tech predictions I still cringe about, such as denials of the adverse effects of social media. America's tech gurus have also revealed alarming aspects of their corporate philosophy, as when Mark Zuckerberg suggested back in 2010 that he found the concept of privacy to be rather outdated.
It would be good if the tech titans listened to these concerns and worked within the society they exist in. That includes abandoning the "move fast and break things" ethos that drives so much technology innovation, a point Demis Hassabis, chief executive of Google DeepMind, made recently to the New York Times' Ezra Klein on his podcast. Move-fast-and-break-things thinking, which includes dismissing people who raise concerns as roadblocks to progress, had tragic results when it was applied to the Titan submersible, and it has not worked out very well for Elon Musk with Twitter either. It is fine for, say, an app that tells you where the best parking lots are near your location, but not for something as dangerous as AI.
So a lot of caution is warranted, and that includes hearing pushback and demands to slow down. That pushback is already starting in earnest: the actors' and writers' strikes hitting Hollywood right now, and the various lawsuits against Meta and OpenAI, maker of the chatbot ChatGPT, over copyright and the harvesting of content without permission.
The sheer amount of change that AI could unleash on our society, both good and bad, may be overwhelming for a lot of people. The United States is already facing an epidemic of deaths of despair, meaning suicide and deaths from drugs and alcohol. Injecting something as consequential as AI into this environment could be devastating: people may not recognize the world they are living in 10 years from now, and feel alienated as a result. AI could generate ever more insidious misinformation to enrage these individuals and drive them to violence. And as it gets better and better, AI may force us to rethink things like art and reconsider what makes us special, adding to the existential dread.
But this grim scenario does not have to happen. AI could be a cure for loneliness, or a way for researchers to tailor better interventions to prevent these deaths. We can create an AI that is a boon for our society, or one that sends all of our ills spinning out of control while inventing new horrors. The solution is not to try to stop AI. The solution is to build an AI that considers all voices, built out of partnerships among government, non-profits, activists, and enterprise. An AI that does great things, like curing diseases and helping us shape a better world. An AI that avoids amplifying the worst traits of who we are as a people, like bigotry and greed.
If we are going to introduce an alien intelligence into our society, it might as well work with us. And to do that, we need AI that is truly human.
The last word goes to St. Vincent.