Is Musk's AI Chatbot Biased?

Another week goes by and another controversy breaks out over the artificial intelligence chatbots that seem to be worming their way into every aspect of our online lives, whether we're aware of it or not. This week, the contentious claims focus on Elon Musk's "Grok," the AI chatbot built into the X (formerly Twitter) interface. Users can ask it questions about topics of interest and it will fill in some background information to the best of its ability. I spend most of my time with ChatGPT-4o and have only dabbled a bit with Grok out of curiosity, because the initial results I received weren't all that impressive. It's not that they were "wrong" so much as they seemed incomplete in significant ways. But others have had experiences that led them to believe Grok actually has a conservative bias and delivers information unfavorable to liberals. Several secretaries of state have now demanded that Musk fix the situation. Could that be true? Perhaps. CNBC has more details on the latest complaints.

Five secretaries of state on Monday urged Elon Musk to fix his social media platform X's artificial intelligence search assistant after it allegedly shared false information about the 2024 presidential election.

The secretaries in a letter to Musk said that X's AI chatbot Grok misled users about ballot deadlines in numerous states shortly after President Joe Biden dropped his reelection bid against former President Donald Trump on July 21.

Musk, the billionaire CEO of Tesla and SpaceX, had endorsed Republican presidential nominee Trump before Biden quit the race and backed Vice President Kamala Harris as the Democratic nominee.

It seems to me that there are three fundamental questions to address here. Was Grok's information correct? If not, why was it incorrect? And finally, was the error accidental or an intentional product of bias? As to the first, the information was mostly wrong. It was not legally impossible to make changes to the ballots in those states at that time. As to the second, Grok relies on its historical data library plus whatever newer data it can glean from relatively current online sources. In several states, including Ohio, the traditional deadline for making ballot changes had indeed passed, but those states made rule changes or legislative fixes to extend the deadlines and accommodate the mess that Joe Biden and the Democrats had created. Grok likely was not up to date on those changes, so its answers reflected the old deadlines. That takes care of the first two questions.

So was there inherent bias involved on the part of Elon Musk or (far more likely) the programmers who developed Grok? The popular perception of Elon Musk changed radically after he began advocating for free speech and eventually endorsed Donald Trump. In the eyes of the leftist elites, he was suddenly "a bad dude." So any time a product of one of his companies produces something that could be perceived as giving Trump an advantage and the Democrats a disadvantage, it simply must be a nefarious plot to interfere in the election! That's balderdash. Elon didn't write the code for Grok, load its massive data library, or select its contents. That's not his forte. He pays other people to do those things. And we never heard any of these complaints about his products before his Trump endorsement.

If you click on the "Artificial Intelligence" content tag at the bottom of this article, you can scroll back and review some of our coverage of previous bias complaints against all of the AI chatbots. And there have been many. Every bot has faced this scrutiny, including earlier editions of ChatGPT. Some of those complaints were far more compelling than the ones against Grok. But unless you believe that the AI has already "woken up" and developed a political bias, the skew is simply a result of disparities in the source material in the bot's library. No human can go into the code in real time and tailor the answers the bots give to favor one political faction over the other. Too many requests fly through at lightning speed, and the answers are delivered too quickly for any human to manage.

These AI systems do what they do based on their underlying code and the libraries of data they were fed. Allowing them to tap into internet resources independently to obtain newer information and stay up to date is problematic for a number of reasons, but there has been no indication that the bots are seeking out incorrect information in order to provide biased responses. The bots aren't even aware of which information is "correct" or "incorrect." They simply digest, process, and regurgitate the data they have in conversational form. This is a non-controversy as far as I'm concerned. But you should always double-check any information you receive from a chatbot before you start posting it and potentially embarrassing yourself.
