“Did the Holocaust happen?” is not a question up for debate. You don’t have to search very hard to find irrefutable evidence. Ask the thousands of Holocaust survivors still alive today for confirmation. Ask Google, however, and the answer is not so clear.
We search Google for everything from cheap concert tickets to long-lost relatives. We place such confidence in the convenience of the search bar that people can settle heated arguments just by whipping out a smartphone. But last month, The Guardian demonstrated the danger of blindly trusting these results. A reporter found that the top organic search result for the query “Did the Holocaust happen” was not an authoritative website providing historical details, but the neo-Nazi site Stormfront. The revelation prompted many to ask: Should Google really give uncontested legitimacy to Holocaust deniers?
Initially, Google did not want to alter its search rankings, which would have required tweaking its notoriously secret proprietary algorithm. After The Guardian’s article generated bad press, the company eventually made changes, but they have not stuck, sparking debate over the finality of the fix: Stormfront is back in the top slot for that search about the Holocaust.
Back in 2004, Google Search came under fire from the Anti-Defamation League over a similar controversy. A search for the word “Jew” gave prominence to, you guessed it, Holocaust denial. Then, too, Google refused requests at first, only to amend its algorithm later.
In these conversations, the discussion always focuses on ways the user can influence the algorithm. But if a troll trained in search engine optimization can quickly upend any attempts to control this algorithm, Google needs to change the way it thinks. Here’s how it can organize search results in a way that’s clear, coherent, and responsible.
1. Give prime real estate to reputable sources
Instead of letting search results become a turf war, Google should prioritize authoritative information by grouping credible links into segments under headers such as “Reference.” When it comes to questions that have definitive answers, Google has a responsibility to deliver them. When the question casts doubt on a documented genocide, what purpose does a forced “neutrality” serve?
Google has already flirted with this technique by quietly introducing “featured snippets” to answer questions in search. These pull content from external sites the algorithm deems authoritative enough. For example, the query “how do I bake an apple pie” returns a featured snippet that contains a link to a 2014 recipe, a picture of a pie, and a succinct summary. These snippets are embedded at the top of the page, setting a precedent for bringing more focus to credible content. However, Google does offer the disclaimer: “Like all search results, featured snippets reflect the views or opinion of the site from which we extract the snippet, not that of Google.”
Of course, there is a business objective for featured snippets and their prominent placement at the top of search engine results pages (SERPs). “They’re trying to answer the question better, but they’re also keeping people on Google,” Michael Tesalona, founder of the SEO firm Bradford & Crabtree LLC, told me.
If Google ensured that the snippets would be credible, it would only help its cause. Consistent segmentation would eliminate confusion and instill much needed confidence. Positioning reliable information at the top of the hierarchy does not promote censorship; it just means disreputable information would not get top billing.
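The segmentation idea can be sketched in a few lines. This is a toy illustration, not Google’s actual ranking logic: the allowlist of vetted reference domains and the naive domain parsing are assumptions made for the example.

```python
# Toy sketch: partition search results so links from a (hypothetical)
# allowlist of vetted reference domains are grouped under a "Reference"
# header ahead of everything else. Nothing is removed from the results;
# disreputable links simply lose top billing.

REFERENCE_DOMAINS = {"ushmm.org", "britannica.com", "history.com"}

def domain_of(url):
    """Extract the registrable host from a URL (naive: ignores multi-part
    suffixes like .co.uk; a real system would use the Public Suffix List)."""
    host = url.split("//", 1)[-1].split("/", 1)[0]
    return ".".join(host.split(".")[-2:])

def segment_results(urls):
    """Group results into a 'Reference' segment and everything else,
    preserving the original order within each segment."""
    reference = [u for u in urls if domain_of(u) in REFERENCE_DOMAINS]
    other = [u for u in urls if domain_of(u) not in REFERENCE_DOMAINS]
    return {"Reference": reference, "Other results": other}

segments = segment_results([
    "https://www.stormfront.org/some-page",
    "https://www.ushmm.org/learn/holocaust",
    "https://www.history.com/topics/the-holocaust",
])
```

The design choice here mirrors the argument above: credible sources surface first under a clear header, while the remaining links stay reachable further down the page.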
“People have agency,” Tesalona said. “By searching for something, they’re expressing that agency. If they don’t see what they want, they will naturally continue looking.”
2. Draw from fact-checking plug-ins
Anyone who received chain emails on their Hotmail account knows “fake news” is not a novel phenomenon. To help readers think more critically about the information they see online, everyone from news outlets to high school students has been developing browser plug-ins that, once installed, label content with indicators of credibility, such as “verified” or “unverified.” As the first and last stop for so much research, Google could be a critical resource in this effort. Recently, Google booted 200 unnamed publishers from its ad networks to combat the proliferation of sites impersonating established news outlets. This is an interesting next step in weeding out phony content, to be sure, but one that relies on mysterious backend decisions instead of transparency.
Google may not want the responsibility of verifying every link it crawls, but it could provide more information before a user clicks. SERPs already include some labels: Sponsored links get flagged as “Ad” at the top of the page to make it clear that someone paid for the placement through AdWords. In the past, Google has used SERPs to test other indicators of a site’s performance, such as whether it is mobile-friendly or loads slowly. It’s not that much of a stretch for Google to adopt other labels that could help identify quality and credibility. For instance, an industry marker could quickly communicate if an article comes from the government, a nonprofit, an educational website, a media publication, or a business site.
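The industry-marker idea could be as simple as inferring a coarse source type from a site’s address. The sketch below is an illustrative assumption, not a real classifier: a top-level domain is a weak signal (media publications, for instance, mostly share .com with ordinary businesses), and a production system would need far richer metadata.

```python
# Minimal sketch of an "industry marker": map a URL's top-level domain to
# a coarse source-type label. The TLD-to-label mapping is an illustrative
# assumption; real classification would require stronger signals.

TLD_LABELS = {
    "gov": "Government",
    "org": "Nonprofit",
    "edu": "Educational",
    "com": "Business",
}

def industry_marker(url):
    """Return a coarse source-type label for a URL, based solely on its
    top-level domain; unknown TLDs fall back to 'Unknown'."""
    host = url.split("//", 1)[-1].split("/", 1)[0]
    tld = host.rsplit(".", 1)[-1]
    return TLD_LABELS.get(tld, "Unknown")

print(industry_marker("https://www.loc.gov/collections"))  # → Government
print(industry_marker("https://www.ushmm.org/learn"))      # → Nonprofit
```

Even a rough marker like this, shown next to each result, would give users one more cue about who is speaking before they click.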
After the 2016 presidential election, Facebook took heat for being another platform propagating bad information. In response, it has since taken steps to curb the spread of false information, including making it easier for users to flag content for review by third-party fact-checking organizations. Facebook is also trying to penalize misleading sites by preventing them from benefiting from ad revenue. Based on its willingness to remove hundreds of publishers from its ad network altogether, Google may be willing to adopt a similar approach.
3. Keep semantic search humane
With natural language processing accurately parsing a search, and neural networks retrieving items faster and with more context, artificial intelligence will transform the way we answer questions in school, work, and beyond. Yet, more powerful technology will also require a more intense focus on how search engines turn information into knowledge.
To see how fast biases can get ugly if unchecked, just look at Microsoft’s 2016 chatbot disaster. Within 24 hours of releasing Tay, a social media bot that was supposed to learn how to speak like a millennial, trolls had trained it to declare that “Hitler was right” among other hateful phrases.
Google has a well-documented interest in artificial intelligence for search-related purposes, but it has also acknowledged the challenges of scaling human expertise responsibly. At least for the foreseeable future, humans should still oversee algorithms to prevent them from burying reliable information. If a query like “Did the Holocaust happen?” arises, a designated section on a SERP could unpack related questions in natural language. This segment should link only to authoritative sources as a worthwhile step to truly finding the best answers. Whether it wants to acknowledge it or not, Google is an arbiter of truth.
Consumer demand for smarter, more humane search is certainly there. After all, Google was only willing to adjust search results after users pointed out that a hands-off approach ran counter to the company’s “don’t be evil” ethos. In November, when Google’s algorithm effectively validated incorrect election results by citing an illegitimate source, the company was widely criticized for its role in spreading false information. (It is now working harder to filter out bad pages from its algorithm, though still refusing to let users in on the specifics.)
Still, users should be optimistic about shaping the Google search experience over time.
“There is an extremely strong alignment of incentives,” Tesalona said. “As a user you want the best results. As a search engine, they want to give the best results. That sort of invisible hand is healthy and working correctly.”