What Google hides from its users
Victoria University of Wellington Master of Commerce student Cameron Lai explores the perils of reliance on Google, finding users of the search engine can miss out on up to 20 percent of relevant information.
7 June 2019
How much do you rely and trust Google search results to show you all the relevant information you are looking for? Do you make an effort to dig deep into the search results even at lower ranks or do you mainly pick from the results at the top of the list? Would you be surprised to learn you could be missing out on as much as a fifth of the information you are seeking?
It is Google’s algorithms creating those search result rankings for you. And while algorithmic decision-making may seem fair and unbiased in theory, it has been found that this is often not the case. Moreover, the inner workings of these algorithms remain opaque to the individuals and organisations using them.
Algorithms that shape our everyday lives have recently received significant attention. Virginia Eubanks, in her must-read book Automating Inequality, details heartbreaking stories of how the models and algorithms used by corporations and state organisations systematically discriminate against the poor and working class in the US, with serious consequences. Cathy O’Neil, in her book Weapons of Math Destruction, similarly describes how these models drive inequality.
The cases Eubanks and O’Neil present are mostly from the US, but that does not mean similar stories could not occur in New Zealand too.
In August last year, RNZ reported that the use of Google search by the New Zealand Police may have unwittingly revealed a link between two suspects who were charged with committing a crime together but had no documented history of any joint crime or crime of the same kind. The incident was significant because one of the suspects had official name suppression, so revealing their name could have opened a loophole for defence lawyers to counter the charges on the basis of the suspect’s rights being violated.
It is assumed the police, when investigating the suspects by performing searches using Google, triggered the search engine’s algorithms to learn a connection, which led to the suspects appearing together in Google search results even when searching only for the suspect whose name was not suppressed.
This incident got Associate Professor Markus Luczak-Roesch, from Victoria University of Wellington’s School of Information Management, and me thinking about what other perils there may be in New Zealand public servants’ reliance on Google search.
Analysing the habits of 30 participants, we quickly established the extent of that reliance.
Most public servants in our study said Google was their first point of inquiry, ahead of other sources such as asking co-workers. They also indicated that they don’t compare their search results with those of other search engines. They generally believed the quality of their work would suffer if they could not use Google, even if other sources of information remained available.
The focus of our research was a Google search personalisation experiment to see how much information Google may hide from us when we conduct a search. The results are highly significant.
In our experiment we found Google personalisation hid up to 20 percent of relevant information. Ultimately, this suggests that when you search for something you may not encounter a significant proportion of relevant information because of personalisation. While this may simply be an inconvenience when you are searching for a place to eat, it can have serious consequences for those who conduct research and knowledge work that leads to policy and/or business decisions, and that is as true for the private sector as it is for the public.
To that end, our findings strongly suggest that if search engines such as Google are being used to conduct research, their results should be compared with other sources. We also highly recommend comparing those same search engines with themselves, using tools that hide your identity and remove personalisation, so you can see what information surfaces from a more ‘objective’ view.
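The comparison we recommend can be quantified quite simply: take the results of a depersonalised search (for example, from a private browsing session or an anonymising tool) as a baseline, and measure what share of them never appears in your personalised results. The following is a minimal illustrative sketch, not part of our study; the function name and the placeholder result URLs are invented for the example.

```python
# Hypothetical sketch: quantify how much of a depersonalised ("neutral")
# result list is missing from a personalised one. The URLs below are
# placeholders standing in for real search results.

def share_missing(neutral_results, personalised_results):
    """Fraction of the neutral result set absent from the personalised set."""
    neutral = set(neutral_results)
    if not neutral:
        return 0.0
    missing = neutral - set(personalised_results)
    return len(missing) / len(neutral)

# Placeholder example: one of five relevant results is hidden.
neutral = ["a.nz", "b.nz", "c.nz", "d.nz", "e.nz"]
personalised = ["a.nz", "b.nz", "c.nz", "d.nz"]

print(f"{share_missing(neutral, personalised):.0%} of relevant results hidden")
# → 20% of relevant results hidden
```

A gap of this size on a real query would mirror the personalisation effect our experiment observed.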
Public sector organisations, and private ones too, should provide training about the potential algorithmic and human biases that may affect judgments and decision-making, as well as clear guidelines on how to minimise the risk of missing information. It may even be necessary to provide dedicated infrastructure that obfuscates users’ identities to circumvent personalisation.
After all, 20 percent is a lot of information we are missing out on.
Read the original article on Newsroom.