AI: Brave New World Or Conjuring Demons?

[Image: a brain on a computer circuit]

If you read my stuff here regularly, you know that I had a front-row seat (well, maybe a fifth-row seat) to the first Internet boom and bust of the late 1990s and early 2000s.

I remember being online by the mid-1990s, and people who were not online in those early days had little understanding of the Internet. Some thought it was a kind of video game; some dismissed it as a passing fad. Then, as more people became connected, the bold predictions came: talk of the Internet completely revamping the economy, of creating a world of global understanding because people would learn more about each other. There were also plenty of people raising concerns about potential ill effects.

So, I have seen a game-changing technology in my lifetime, and because I worked in that world, I had a perhaps more informed understanding of it than most. But I do not think any of us has seen anything close to artificial intelligence (AI).

We are at a point right now where AI is very little understood, sort of like where we were with the Internet in the mid-1990s. This video gets lots o’ chuckles today, but where we are right now with AI is where we were with the Internet in 1994:

That clip has it all, and I do not mean the 1990s women’s hairstyles. Professional Insufferable Human Bryant Gumbel’s puzzled, skeptical annoyance; Katie Couric trying to reason out what the @ means; Elizabeth Vargas blowing Couric’s mind by saying she didn’t think the Internet needed phone lines (true to a point in 1994)…that is where we were then. Back in 1994 it would have been very hard to imagine the world of 2023, and that was not that long ago.

And so it is with AI. Even the people developing AI applications do not have a real understanding of how it might affect us. That is rather frightening, because, as Ezra Klein notes in his excellent column on AI, you are basically opening a portal, or walking into a dark cave with no idea what is on the other side:

We typically reach for science fiction stories when thinking about A.I. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.

It is probably quite tempting to be dismissive, to come back with “well I’m not really a tech person” or “I don’t even own a computer” or any number of the thought defense mechanisms people conjure up to rationalize ignoring all of this. But that did not work in 1994: within a decade it was almost impossible to completely avoid some sort of Internet technology, and within two decades the Internet had, for better or worse, reshaped all of our lives. And here is what is both so disconcerting and legitimately exciting about AI: it is moving very, very fast.

For example, look at perhaps the most famous current example of AI, OpenAI’s GPT-4, the fourth version of its language model. The previous version struggled with the bar exam, scoring in the 10th percentile. GPT-4 passed it at a remarkable 90th percentile. It also scored in the 88th percentile on the LSAT, up from the 40th.

That is just one example, but it is clear that this technology is progressing at a rapid rate. This matters for two reasons. First, any assumption about AI’s effects cannot be based on the current version of any application; you have to think about what it might look like in five or ten years. Second, we tend not to handle rapid change very well.

Two examples are COVID and social media. We are still reeling from the effects of COVID, and it is pretty clear that in America we did not do well on that particular test. Social media, meanwhile, came on the scene in earnest in 2007. Within a decade, it became clear that giving humans social media was in many ways like giving a five-year-old a sugary bowl of cereal, then a pack of matches and a can of gasoline.

To be clear, some genuinely good things have come out of social media, from being able to find old friends on Facebook to the way Twitter has been used to empower disadvantaged communities. But the damaging effects are undeniable. I wish I could find the Tweet to credit the person who said it, but this particular quote stuck with me:

We have given everyone in the world a platform to say whatever they want at any time – What’s plan B?

Releasing a technology we do not understand, one that can “think,” into our society is a real Pandora’s box: once opened, it will be very hard to close. There is no way we can eliminate social media now, for all its ills. AI would be much more difficult still to remove if it causes catastrophes.

What form these catastrophes would take is obvious in some ways and not so obvious in others. One issue is that we likely would be caught off guard by them. The public has been trained by the Terminator movies to think of the dangers as killer robots, but that is a complete misread; killer robots would not be our first problem. Also, many people have a limited understanding of science. There is a quote by atmospheric science professor Marshall Shepherd that I used to think of exclusively in terms of the public’s often poor understanding of weather and climate change, but now I think it applies to understanding AI as well:

Threats such as AI-designed scams and manipulative misinformation would not be recognized as threats by people unfamiliar with how AI works, and given the complexities of AI, that would be most of the population.

Other threats from AI are made worse because of, well, capitalism. An AI trained to maximize profits for a particular hedge fund could wreak real havoc on our economy, and even our society. Powerful lobbyists would work to deflect any regulation that threatened those profits, and that lobbying could itself be made far more effective by AI. We are already data-mined to death by social media; just imagine AI designed to manipulate human behavior with the sole goal of making the most money for one company.

There are also other specters looming, such as what might happen when nefarious actors get their hands on better and better AI tech. Russia’s 2016 election interference has its roots in troll farms, which succeeded in causing panic in Louisiana on Sept. 11, 2014, by making people believe there was a huge chemical fire spewing dangerous fumes. There wasn’t. Given our history, we are of course not immune to using AI ourselves on vulnerable, less-developed nations. Then there are scammers: we already have incidents where deepfakes are used to make people believe loved ones have been kidnapped. And if you think online misinformation is bad now, just imagine AI-designed misinformation that can be created in seconds.

There is also the threat to jobs and livelihoods. Pretty much any job that can be done remotely, from IT support to coding to writing to design to some assistant jobs, likely will soon be done as well as or better by AI than by humans.

Listen, not all of this is doom and gloom. AI from Meta has modeled 700 million proteins, a herculean scientific feat that would normally take humans years to pull off, and one that holds great potential for medicine. I myself have written about how AI can drive huge scientific strides here and here. We could be looking at a cancer cure within this decade. We could task AI with solving the climate crisis in a way everyone could agree on, potentially saving millions of lives and creating a better way of life for everyone.

Also, the potential job losses may not play out as we think. Not long ago, when self-driving vehicles were all the rage, truck driving was a job that was supposed to start disappearing immediately, but things like insurance liability and ethics (think road versions of the Trolley Problem) slowed that way down. Jobs may survive, with AI becoming your own little personal assistant that you have to guide and manage. If you are an IT support person, part of your job may be either shutting off the AI so you can work or talking the AI through a malfunction within its own system, a bit like a doctor with a patient.

In any event, while it is not a guarantee, the odds are pretty good that we will end up with a world that, overall, looks nothing like the one we have now. The movie “Her” was about a man who fell in love with his AI computer operating system, and we are very close to that. AI-driven mental health therapy is being tested now. AI-enabled engineering could create and put in place systems 10 years from now that we cannot envision today. AI can write better code, and there is no reason it cannot do that for itself, and then we have self-replicating and self-repairing AI systems.

All of this is both exciting and kind of scary, and a lot of it hinges on how we prepare for it. The Y2K problem was a big nothingburger only because scores of coders fixed the code. I worked with a man who did Y2K code fixing for a major bank as a side hustle, and he said that if they had just let it go, then on January 1, 2000, everyone’s loan would have been considered way overdue and canceled. Imagine that chaos.
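To make that concrete, here is a minimal sketch of the classic two-digit-year bug. This is purely illustrative and assumes a hypothetical loan record that stores years as two digits with an implied “19” prefix; it is not any bank’s actual code:

```python
from datetime import date

# Hypothetical loan record: to save storage, the year is kept as two digits
# with an implied "19" prefix. That assumption was the heart of Y2K.
def due_date_from_record(yy: int, month: int, day: int) -> date:
    return date(1900 + yy, month, day)  # bug: year "00" becomes 1900, not 2000

today = date(2000, 1, 1)
due = due_date_from_record(0, 6, 30)  # a loan actually due June 30, 2000

print(due)                 # 1900-06-30
print((today - due).days)  # 36344 days: nearly a century "overdue" on day one
```

Any billing system that flagged loans past a grace period would have treated every borrower as roughly a century behind on payments, which is exactly the chaos my former colleague described.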

The difficulty here is that AI is moving so fast we have little time to prepare. Despite its looming importance, there is next to no talk in political circles about any needed laws, mainly because politicians often look really dumb talking about technology, and politicians hate looking weak and stupid. But there is really no way to avoid it. Local politicians, employers, employees, retired people, everyone, need to begin paying attention, because there is a definite chance AI changes a lot about how we all operate. We also do not really know whether we can control AI; if things do go very wrong, it might become so integrated into our systems that we cannot remove it, or it may even hide from us.

You can shrug and say “I’m not going to worry about it” or “I doubt I will be affected, I’m not online much,” but then you become Bryant Gumbel in 1994. Facebook collects data on you even if you are not on it. When the Pittsburgh Steelers’ home field changed its name from Heinz Field to Acrisure Stadium, many people scratched their heads about this company they had never heard of getting its name on their beloved team’s house. But Acrisure is part of the growing AI-driven insurance field, and soon it is likely that all of our insurance dealings will be at least partially AI-designed. AI will likely be nearly impossible to avoid, and such rapid change will be very difficult for some people to handle.

So can we slow all of this down? Not likely. There is money to be made, and slowing down would require a massive amount of coordination among both partners and competing entities. So we will not have decades to adjust, but more like a small number of years, and in many cases mere months. I have not even touched on the impacts on education (already causing huge upheaval), how it could affect personal relationships, how our brains might work in a world where everyone has a pocket assistant to think for them, or the impact on art such as music (just imagine being able to have AI write and perform a song just for you).

I will end with Ezra Klein’s closing from the opinion piece I linked above. He has the last word today.

One of two things must happen. Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough.

What we cannot do is put these systems out of our mind, mistaking the feeling of normalcy for the fact of it. I recognize that entertaining these possibilities feels a little, yes, weird. It feels that way to me, too. Skepticism is more comfortable. But something Davis writes rings true to me: “In the court of the mind, skepticism makes a great grand vizier, but a lousy lord.”