Google's search algorithm could steal the presidency

One group saw positive articles about one candidate first; the other saw positive articles about the other candidate. (A control group saw a random assortment.) The result: whichever candidate the positive results favored, participants became more likely to vote for—by more than 48 percent. The team calls that number the “vote manipulation power,” or VMP. The effect held—strengthened, even—when the researchers swapped a single negative story into the number-three and number-four spots. Apparently it made the results seem more neutral and therefore more trustworthy.

But of course that was all artificial—in the lab. So the researchers packed up and went to India in advance of the 2014 Lok Sabha elections, a national campaign with 800 million eligible voters. (Eventually 430 million people voted over the weeks of the actual election.) “I thought this time we’d be lucky if we got 2 or 3 percent, and my gut said we’re gonna get nothing,” Epstein says, “because this is an intense, intense election environment.” Real voters are heavily exposed to plenty of information beyond a mock search engine’s results.

The team found 2,150 undecided voters and performed a version of the same experiment. And again, VMP was off the charts. Even accounting for some sloppiness in the data-gathering and a tougher time assessing articles for positive or negative valence, they got an overall VMP of 24 percent. “In some demographic groups in India we had as high as about 72 percent.”