The other day we looked at the story of a “robot journalist” (actually a clever Artificial Intelligence program) that published a column and fooled quite a few people into thinking it had been written by a human being. That’s obviously a worrisome trend if you happen to be in my line of work, but at the time it looked like something of an outlier. Was it, though? More examples keep turning up in the news of fancy algorithms managing to land bylines in popular publications.
The most recent one was featured at NBC News this week, but this wasn’t a case of a robot writing an opinion piece on politics or current events. The article appeared in an online newsletter and offered an analysis of personal productivity and how people tend to overthink things. It’s a rather tame subject and the article wasn’t particularly noteworthy, but plenty of people commented on it anyway. A few noted that something didn’t look quite right, and it was later revealed that the article had indeed been written by a piece of software called Generative Pre-trained Transformer 3, or GPT-3. The person directing the software was Liam Porr, a computer science student at the University of California, Berkeley.
“With something like a language model, there’s not a good quantitative way to see how good it is because language and writing is qualitative,” Porr said. “With this kind of experiment I can concretely say, 20,000 unique people came to my website and only three actually had the sense to say it was written by a robot.”
GPT-3 is not the first natural language program of its kind, but it has already received widespread attention for how good it is at mimicking simple human writing. Its release into the world, while not entirely public, has caused some concern that it could be used to quickly and cheaply generate misinformation or propaganda. Porr’s post, while a harmless experiment, offered a concrete example of that risk.
And it places GPT-3 alongside other pieces of advanced software that have spread across the internet and caused alarm.
One of the alarm bells going off over this development involves the upcoming elections, but not in terms of pure punditry. The suggestion is that people could set up bogus social media accounts, blogs, and the like, then use GPT-3 to continually crank out fake stories in an attempt to influence the election. The software would be far more efficient (and cheaper) than having humans come up with all of that content on their own. If the masterminds behind such a scheme simply feed in the basic topics and false premises they want to promote, GPT-3 can do the rest while the boss nips off for a cold one at the pub.
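To get a sense of how little effort that would take, consider that the whole pipeline boils down to “feed a topic in, get paragraphs out.” GPT-3 itself sat behind a private, invitation-only API when this was written, but here’s a rough Python sketch of the same prompt-and-generate pattern using the freely downloadable GPT-2 model via Hugging Face’s transformers library. The topic list below is purely hypothetical:

```python
# A minimal sketch of the kind of automation being described, using the
# openly available GPT-2 model (GPT-3 itself required private API access).
from transformers import pipeline

# Load a small, freely downloadable text-generation model.
generator = pipeline("text-generation", model="gpt2")

# The operator supplies only the seed topics; the model writes the rest.
# These topics are hypothetical placeholders.
topics = [
    "Why overthinking kills your productivity",
    "The one habit every successful person shares",
]

for topic in topics:
    result = generator(topic, max_length=120, num_return_sequences=1)
    print(result[0]["generated_text"])
    print("---")
```

Swap in a more capable model and a list of talking points, and the content farm essentially runs itself.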
This may not be all that widespread yet, but the day is clearly coming when this type of automated content creation will be all over the place. Remember, only three of the roughly 20,000 people who viewed the original article linked above flagged it as possibly being generated by AI. I’ll freely admit that if I didn’t already know the background story, I wouldn’t have picked up on it having been generated by a bot. Just take a look at a small sample.
In order to get something done, maybe we need to think less. Seems counter-intuitive, but I believe sometimes our thoughts can get in the way of the creative process. We can work better at times when we “tune out” the external world and focus on what’s in front of us.
I’ve been thinking about this lately, so I thought it would be good to write an article about it.
So what exactly does this mean? Well, for starters, let’s start with some definitions.
The program isn’t pumping out perfect syntax in every line, but that only makes it seem more like a “regular person” wrote it. The language is almost playful at points, asking leading questions and inserting some (mildly) humorous answers. Again, this wasn’t some breakthrough work on the subject of human creativity, but it looked like a legitimate blog entry from someone pondering the topic.
As I mentioned in the last column about robot writers, this is growing increasingly disturbing. It still isn’t true Artificial Intelligence because the program isn’t producing original thoughts, at least as far as we know. But it’s generating viable content. Between this and all of the “deep fake” videos showing up around the web, it’s growing harder and harder to tell what is fact and what is fiction. Combine the fake image generation currently available with robot writing and you could launch an army of “people” who have never drawn a breath to barge into online discussions and potentially influence them.
As one final example in closing, I’ll introduce you to a Twitter account that I was recently pointed toward. Every fake person needs a face and at least some sort of backstory, right? Well, the account This Person Does Not Exist generates a fresh batch of them every hour of the day, and I’ll confess that most of these pictures could have passed for selfies from real human beings. Here’s just one recent example.
This person does not exist: Syuzanna Aalmers (20 years), Penn Hills (United States of America). Separated. pic.twitter.com/3VaIzHFD2e
— This person does not exist (@wedontexisthere) September 17, 2020
Great news, guys. She’s available.
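For anyone curious how easily those faces could be harvested, the answer is: very. Here’s a minimal Python sketch, assuming the site behind that account still serves a freshly generated image at its root URL on every request (the filename below is just a placeholder):

```python
# A minimal sketch of grabbing one machine-generated face.
# Assumption: thispersondoesnotexist.com returns a new JPEG from its
# root URL on each request, as it did when this was written.
import requests

resp = requests.get(
    "https://thispersondoesnotexist.com",
    headers={"User-Agent": "face-demo"},  # some hosts reject empty agents
    timeout=10,
)
resp.raise_for_status()

with open("not_a_real_person.jpg", "wb") as f:
    f.write(resp.content)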