Google’s Digital Truth Serum: Could a New Algorithm Cut Down on Internet Lies, Disinformation, and Woo?

March 4, 2015

It’s almost impossible to traverse the Internet for even an hour without coming across a hoax, a half-truth, or a bit of satire that others (never we, of course!) think is real. If you’re anything like me, you’ve spent hundreds of hours checking out and refuting such fare (Dutch boy, dike, you get the picture).

What if the bullshit sank to the bottom, sinking way down the Google page rankings to a spot where we could still find it if we wanted to, but where few would likely just chance upon it?

We may get there.

A team of computer scientists at Google has proposed a way to rank search results not by how popular web pages are, but by their factual accuracy.

To be really clear, this is 100 percent theoretical: It’s a research paper, not a product announcement or anything equally exciting. (Google publishes hundreds of research papers a year.) Still, the fact that a search engine could effectively evaluate truth, and that Google is actively contemplating that technology, should boggle the brain. After all, truth is a slippery, malleable thing — and grappling with it has traditionally been an exclusively human domain.
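For the technically curious, the researchers’ metric, which their paper calls Knowledge-Based Trust, boils down to this: instead of counting inbound links, extract factual claims from a page as (subject, predicate, object) triples, check them against a knowledge base (Google’s draws on its Knowledge Vault), and trust the page in proportion to how many of its claims check out. Here is a deliberately crude sketch of that idea in Python. The knowledge base, page data, and scoring below are all invented for illustration; the actual paper uses a joint probabilistic model, not this simple fraction.

    # Toy sketch only, NOT Google's actual model: score each page by the
    # fraction of its extracted claims that agree with a trusted knowledge
    # base, then sort results by that score instead of by link popularity.
    # All data below is fabricated for illustration.

    KNOWLEDGE_BASE = {
        ("Barack Obama", "date_of_birth"): "1961-08-04",
        ("China", "capital"): "Beijing",
        ("United Kingdom", "capital"): "London",
    }

    def accuracy(claims):
        """Fraction of a page's checkable claims that match the knowledge base."""
        checkable = [c for c in claims if (c[0], c[1]) in KNOWLEDGE_BASE]
        if not checkable:
            return 0.5  # nothing we can verify, so stay neutral
        hits = sum(1 for s, p, o in checkable if KNOWLEDGE_BASE[(s, p)] == o)
        return hits / len(checkable)

    def rank_by_accuracy(pages):
        """Order pages by factual accuracy, most trustworthy first."""
        return sorted(pages, key=lambda p: accuracy(p["claims"]), reverse=True)

    pages = [
        {"url": "hoax.example",
         "claims": [("China", "capital", "London")]},
        {"url": "facts.example",
         "claims": [("China", "capital", "Beijing"),
                    ("Barack Obama", "date_of_birth", "1961-08-04")]},
    ]
    for page in rank_by_accuracy(pages):
        print(page["url"], accuracy(page["claims"]))
    # facts.example 1.0
    # hoax.example 0.0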

There are caveats and worries. For starters, not all truth claims are equal. It should be easy for Google’s computers to instantly verify whether a web page is correct when it publishes the president’s birthday, or when a blogger asserts that London is the capital of China. But, at some point, will the superbrain also weigh in on whether coffee enemas can cure cancer, or on whether Islam is an inherently violent religion?

Will it see that phrase “London is the capital of China,” as I used it above, and understand that, in context, it is not as geographically illiterate as it seems?

Will the Google algorithm push the habitually mendacious Daily Mail newspaper into the search-results desert and give ranking preference to publications that strive harder to be truthful, such as the New York Times?

Would we want it to, given how easily this technology could, in theory, be manipulated?

I’m cautiously optimistic. Anything that tackles the problem of unreliable information, and aims to serve up more vetted facts, is worth thinking (and dreaming) about. Consider:

In one trial with a random sampling of pages, researchers found that only 20 of 85 factually correct sites were ranked highly under Google’s current scheme. A switch could, theoretically, put better and more reliable information in the path of the millions of people who use Google every day. And in that regard, it could have implications not only for SEO, but for civil society and media literacy. …

“How do you correct people’s misconceptions?” Matt Stempeck, the guy behind LazyTruth, asked New Scientist recently. “People get very defensive. [But] if they’re searching for the answer on Google they might be in a much more receptive state.”

There are already precedents for this.

Just three weeks ago, Google began displaying physician-vetted health information directly in search results, even commissioning diagrams from medical illustrators and consulting with the Mayo Clinic “for accuracy.” Meanwhile, Facebook recently launched a new initiative to append a warning to hoaxes and scams in News Feed, the better to keep them from spreading.

Provided these initiatives are smartly and ethically implemented and administered, this is a development whose time has come.

(Image via Shutterstock)

