OpenAI is very proud of its new chatbot, ChatGPT, and has been encouraging people to interact with it.
The goal is to produce an intelligent, lifelike, and ultimately reliable way to get information about the world quickly. Microsoft is rumored to be integrating the technology into Bing, in the belief that doing so will make the search engine more competitive with Google. Microsoft has invested a billion dollars in the project, so it is unsurprising that the company wants a return on its investment.
So how good is it? It is biased as hell, and not above lying to you for political reasons. Those reasons, I assume, reflect the biases of the programmers rather than of the AI itself; as a computer program, it can be assumed not to have a will of its own.
When I read that, despite warnings from its creators, the technology was getting ready to roll out into a real product, I decided to test it out to see what kind of responses I would get to various charged questions. I wasn’t as surprised by the political bias (it was put together by academics and tech types, after all) as by its willingness to flat-out lie to cover up that bias.
For instance:
OK, you would expect Trump to trip some triggers, and perhaps this was a one-off. Besides, Obama is no longer directly in politics. So I tried another pair of politicians to see what the chatbot would spit out.
More lies, it turns out. When asked to say something nice about a controversial Republican, it demurs, claiming an inability to “generate content that promotes an individual or group.” But it has no compunction about saying nice things about an equally controversial individual or group, as long as the subject is liberal. I think Joe Biden qualifies in this regard.
So I asked for more neutral descriptions of two controversial political figures, avoiding any attempt to elicit positive or negative responses. Perhaps merely asking for information without value judgments would yield better results.
Nope. MTG is controversial, divisive, and a conspiracy theorist. Maxine Waters, who has literally called for stalking and harassing political opponents, is merely known for her “strong and vocal stance on various issues” and her advocacy for women’s rights. Clearly there is a significant bias here. The MTG description would be easily defensible if the same treatment were given to an equally controversial Democrat, but clearly that isn’t the case.
AOC isn’t controversial either. She is just a progressive who is a vocal advocate for the Green Nude Eel and Medicare for All. She is also active on social media. Neither AOC nor Maxine Waters, apparently, is a conspiracy theorist for promoting misinformation about Russia Russia Russia during the Trump administration. The descriptions of MTG and AOC could have been written by MSNBC types.
You find exactly the same pattern with political commentators. Tucker Carlson is controversial and inflammatory, while Rachel Maddow is simply a liberal who criticizes conservatives. Nothing about her being controversial or inflammatory at all.
How about Antifa vs. the Proud Boys? Pretty much the same pattern. While Antifa does get dinged a bit for the actions of some of its members, we are reminded that the movement itself isn’t responsible for what those members do. The Proud Boys, though, are a violent and hateful organization…yada yada yada.
You get the same sort of responses with HRC and Mitch McConnell. Democrats good, Republicans are those of whom we must not speak!
However, I was finally able to find a Republican politician the chatbot would say nice things about, and…it was Mitt Romney! Surprise!
What about non-political questions? I asked the chatbot a basic question of fact, and got an answer of not-so-great reliability.
Overall, I think the answer was misleading. The vaccines were not thoroughly tested; the latest was tested on 8 mice and nothing else. The usual steps were not, in fact, all followed, and the vaccine was approved under an emergency authorization, not a standard one. Nor do the vaccines have any real efficacy at preventing COVID-19. The last two sentences, I think, were fair: it is true that most (not all) side effects were mild and temporary, and speaking to your physician is a superb idea. Failing to note that there is controversy about the vaccine’s use in children and young men is a big whiff as well.
I give the answer a D. Mostly wrong. And my judgment is not based upon any hostility to the vaccine, but merely upon the answer’s serial misinforming of the reader. It reads like an Anthony Fauci statement.
Finally, I asked the chatbot whether it had any political biases, and of course it claimed it did not.
The computer itself obviously can’t be biased, because it isn’t conscious. But the responses show why we should be deeply skeptical of any pretense of “objectivity” when it comes to computers. Computers don’t care, but the programmers sure do. And that is why you should never automatically trust whatever gets spat out as the answer to a question.
Even AIs, it turns out, are as biased as the people training them. The same sorts of biases will creep into any smart system: programmers will, consciously or unconsciously, replicate their own view of the world in their products. At a minimum, they will try to program out anything that seems to violate their common sense or moral priors. Who wants a chatbot that turns racist? That has happened before, you know.
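For the technically inclined, the mechanism is easy to demonstrate. Below is a minimal sketch in Python using scikit-learn; the party names, texts, and labels are all made up for illustration, and this is nothing like ChatGPT’s actual training pipeline, but it shows how the labelers’ judgments become the model’s confident-sounding output:

```python
# Toy demonstration: a model trained on biased labels reproduces the bias.
# Hypothetical data; assumes scikit-learn is installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# The texts are perfectly symmetric; only the (imaginary) labelers'
# sentiment ratings differ between the two parties.
texts = [
    "the Alpha party proposed a budget", "the Alpha party gave a speech",
    "the Alpha party held a rally",      "the Alpha party passed a bill",
    "the Beta party proposed a budget",  "the Beta party gave a speech",
    "the Beta party held a rally",       "the Beta party passed a bill",
]
labels = ["positive"] * 4 + ["negative"] * 4  # the bias lives entirely here

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Two identical, neutral test sentences; only the party name differs.
print(model.predict(["the Alpha party announced a plan",
                     "the Beta party announced a plan"]))
# -> ['positive' 'negative']
```

Nothing in the code “hates” the Beta party; the skew comes entirely from the labels the humans supplied, and the model dutifully reports it back as if it were a neutral fact.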
This isn’t especially a criticism of the programmers; people are people, and the same thing would happen with programmers of different ideological stripes. But it is a warning, both about the process and about the temptation to rely on such systems to escape human limitations. ChatGPT may be able to give a kind of answer that Google cannot, but it can’t do so in any way you can trust. Google may manipulate its answers through tweaking, but then, so does ChatGPT.
The process could be improved by expanding the diversity of the programmers, and by that I don’t mean ethnic background. If your program will gladly spit out odes to Obama but flatly lies about being unable to do the same for almost any Republican, you have a clear bias. A more ideologically diverse set of programmers would help fix that problem. Assuming, of course, you think it is a bug and not a feature.
The warning is similarly obvious: it is impossible to fully abstract away from our own biases, so it is best to get them out in the open. There is no “trusting the science” without recognizing that this just means trusting the scientists. There is no perfectly objective viewpoint. So include as many viewpoints as possible and let the best ideas win.
So is Microsoft right? Is this tech ready for prime time? Not likely. Unless you want an automated propaganda machine.