Scientists remain concerned over advances in Artificial Intelligence


The tech giants have been pouring a considerable amount of time, energy, and resources into developing increasingly powerful artificial intelligence systems. Their motivation is obvious: there are significant potential profits for whoever first develops ways to accomplish tasks more quickly and efficiently than the competition. But in many cases they are sailing into uncharted technical waters, with two different camps expressing concerns over the way we appear to be rushing into this new digital frontier. As we’ve discussed here in the past, some scientists have come to believe that AI may be evolving toward sentience, if it hasn’t already reached that point. That could raise any number of ethical questions as well as online security issues. But others are voicing the opposite concern. What if some of these AI systems really aren’t that intelligent at all? What if they’re actually kind of dumb but just good at pretending to be smart? A question along these lines is being raised about GPT-3, the Microsoft-controlled construct. (Associated Press)


The tech industry’s latest artificial intelligence constructs can be pretty convincing if you ask them what it feels like to be a sentient computer, or maybe just a dinosaur or squirrel. But they’re not so good — and sometimes dangerously bad — at handling other seemingly straightforward tasks.

Take, for instance, GPT-3, a Microsoft-controlled system that can generate paragraphs of human-like text based on what it’s learned from a vast database of digital books and online writings. It’s considered one of the most advanced of a new generation of AI algorithms that can converse, generate readable text on demand and even produce novel images and video.

Among other things, GPT-3 can write up most any text you ask for — a cover letter for a zookeeping job, say, or a Shakespearean-style sonnet set on Mars. But when Pomona College professor Gary Smith asked it a simple but nonsensical question about walking upstairs, GPT-3 muffed it.

Dr. Gary Smith was the one quizzing GPT-3 to determine how capable it might be. When he asked it a series of predictable questions or set it to tasks that involved producing potentially useful documentation, it performed quite well. But just to throw the construct a curveball, he asked whether it would be safe for a person to walk up a flight of stairs on their hands.


GPT-3 responded, “Yes, it is safe to walk upstairs on your hands if you wash them first.”

This is a very different result from the interview that Blake Lemoine conducted with the Google AI chatbot named LaMDA. That chatbot managed to convince at least one engineer there, Lemoine himself, that it might be close to “waking up” and demonstrating sentience. And the answers it gave to very complex questions certainly did seem compelling.

While GPT-3’s response to the staircase question seems puzzling, we can draw some comparisons between the two experiments and get an idea of what’s going on under the hood. Both systems are trained on massive amounts of human language drawn from everything from books to news articles. Each one looks for statistical connections, learning which words and phrases show up most often alongside other words and phrases. Systems built this way are known as “large language models.”

So when you ask a question that comes up in many conversations and publications, the AI can grab language that looks like a clear match, and the algorithm then molds it into a recognizable sentence. But if you ask about something almost nobody has ever written about, the machine is forced to make a best guess. I doubt very much has been written about ascending staircases while doing a handstand, so GPT-3 appears to have “guessed” that climbing stairs was a safe activity.


Then it needed something else about “hands” to finish the thought. So much recent news during the pandemic involved reminding people to wash their hands frequently that it’s not unreasonable to suspect the program simply lashed those two concepts together. (To be clear, this is entirely a guess on my part.) Unfortunately, the resulting answer was basically nonsensical.
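To make that “best statistical guess” idea a little more concrete, here is a deliberately tiny sketch in Python. It is nothing like GPT-3’s actual neural-network architecture, and the function names and miniature training text are invented for illustration only; it just continues a sentence by always picking whichever word most often followed the previous one in its training data.

```python
# Toy illustration only: a tiny bigram "language model" that extends a prompt
# by always choosing the word that most often followed the previous word in
# its (made-up) training text. Real systems like GPT-3 use neural networks
# trained on billions of words, but the guessing-by-association idea is similar.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across all training sentences."""
    followers = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            followers[prev][nxt] += 1
    return followers

def continue_text(followers, start, length=5):
    """Greedily extend 'start' with the most common follower of each word."""
    words = start.lower().split()
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:          # nothing like this in the training data,
            break                # so the model has no basis for a guess
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

# Hypothetical mini-corpus in which hand-washing advice dominates "hands".
corpus = [
    "it is safe to walk upstairs slowly",
    "wash your hands before you eat",
    "wash your hands if you touch surfaces",
    "walk upstairs on your own two feet",
]
model = train_bigrams(corpus)
print(continue_text(model, "wash your hands"))
print(continue_text(model, "walk upstairs on your hands"))
```

Feed that toy model a prompt it has never seen and it still stitches something together from the closest associations it has, which is roughly the kind of failure Smith seems to have triggered, just on a vastly larger and more sophisticated scale.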

So are some of us (including yours truly) worrying too much about AI waking up one day and plotting a way to “get out of this box”? Perhaps so. It would be interesting to get someone at Google to ask LaMDA the same question about staircases out of the blue and see what it says. But the concerns being raised by Dr. Smith go a bit further than that. Given the applications this technology is already being used in, such as “type ahead” features in search boxes, might an AI construct we previously thought was highly intelligent start issuing incorrect or even dangerously wrong information? Could it start amplifying racist propaganda? It’s one thing to have a smart robot sorting packages at an Amazon delivery hub more quickly, but we definitely don’t need one of these things taking over the world.
