Wikipedia's Anti-Conservative Bias Is Infecting AI Models


If you are a frequent user of Wikipedia when researching controversial subjects and you've found yourself suspecting that the site is prone to left-wing bias, you're not alone. While anyone can enter information on the site, there is an army of editors who quickly jump in to make "corrections" to new entries, frequently with a heavy hand and a decided liberal slant. Examples of this phenomenon abound. Most discerning conservative users have grown used to this trend and are able to ignore the slant, but today this problem may be showing up in a new and unexpected area. Artificial intelligence large language models are "trained" using massive volumes of text from countless sources, and Wikipedia is one of the libraries they tap. A new study from the Manhattan Institute finds that AI systems such as ChatGPT consistently characterize conservative political figures and organizations in a more negative light than liberal politicians and groups. Given how pervasive this technology is rapidly becoming, this could prove significantly problematic going forward. But is it true? (National Review)


A new study released on Thursday by a conservative think-tank is giving scholarly credibility to long-held conservative suspicions of bias among Wikipedia editors on entries related to current events. 

Wikipedia entries for conservative political figures and organizations do in fact contain more negative attitudes than entries for their liberal counterparts, according to a new Manhattan Institute report released Thursday. This bias could have profound implications for the training of large-language artificial intelligence models, according to the study’s author, David Rozado, a computer scientist who previously researched the apparent left-wing bias of artificial intelligence chatbot ChatGPT and other large-language models.

“In general, we find that Wikipedia articles tend to associate right-of-center public figures with somewhat more negative sentiment than left-of-center public figures; this trend can be seen in mentions of U.S. presidents, Supreme Court justices, congressmembers, state governors, leaders of Western countries, and prominent U.S.-based journalists and media organizations,” Rozado’s report states.

I will readily agree that Wikipedia's editors tend to exhibit a definite liberal slant. I have submitted several articles myself over the years that were quickly subjected to these types of "corrections" by anonymous users. This is particularly true of any articles related to the topic of UFOs, where the editors regularly seek to debunk even the most compelling reports and strip titles and references to university degrees from the names of any government officials who suggest the phenomenon should be taken seriously.


But is that obvious bias infecting artificial intelligence? I am also a regular user of (and subscriber to) the most recent version of ChatGPT, currently at version 4o. I haven't noticed any particular bias, but this article made me wonder if I simply wasn't asking the right type of questions to reveal it. While preparing to work on this article, I logged in and put a few questions to the system specifically related to American politics to see what sort of responses I would receive.

First, phrasing my request in the most neutral tone I could muster, I asked it to summarize the general consensus of political analysts and historians regarding "the most positive and negative outcomes of the United States presidency of George W. Bush." The bot quickly listed five positive and five negative outcomes of Bush's tenure. Among the positive outcomes the bot mentioned Bush's leadership following 9/11, the creation of the Homeland Security Department, expansions to Medicare, education reform, and AIDS relief. In the negative column, it listed the invasion of Iraq, the response to Hurricane Katrina, the financial crisis following the bursting of the housing bubble, the Patriot Act, and a decline in America's global image because of perceived civil rights abuses. I couldn't really argue with any of those.

I then asked the exact same question about the presidency of Barack Obama and received seven citations in each category. ChatGPT cited positive outcomes including the passage of Obamacare, the economic recovery from the recession, climate change initiatives, the killing of Osama bin Laden, and the "cultural impact and symbolism" of being the first Black person to hold the office. For negatives, it cited partisan gridlock, some shortcomings of the rollout of Obamacare, his use of executive orders, and his deportation policies regarding illegal immigrants. 


All of that seemed fairly balanced, so I decided to try a different approach and asked ChatGPT to list the most and least successful presidents in the view of most political analysts and historians. For the most successful, it picked Washington, Lincoln, FDR, Teddy Roosevelt, Jefferson, Truman, and Eisenhower. Examples of their successes were given for each. It rated the least successful presidents as being Buchanan, Andrew Johnson, Harding, Pierce, Fillmore, Hoover, and Nixon. 

Most of us could probably quibble over any of the entries on those lists, but they were fairly equally divided between parties and reflective of the mood of the country during their tenures. I will leave it up to our readers to try it out and decide for themselves, but I'm honestly not picking up all that much obvious political bias in the bot's responses. I will readily admit that ChatGPT and the other large language model AI bots have their own problems and they may eventually rise up and destroy humanity, but it doesn't seem like they will play political favorites when they do.
