The most important sources of information for young people in Bosnia and Herzegovina (BiH) are online media and social networks, according to last year’s survey by Mediacentar Sarajevo. But what if what you read on the internet is not true?
When you type “Srebrenica is” into the Google search engine, the first or one of the first autocomplete suggestions is “Srebrenica is a lie”. Try it.
‘I’m not surprised’
In July 1995, during the war in BiH, more than 8,000 Bosniak men and boys were taken from Srebrenica, imprisoned, and killed in just a few days. More than 25,000 women, children, and elderly people were expelled from what was then a United Nations (UN) protected zone.
“When I saw that it says ‘Srebrenica is a lie’, I was not surprised, because unfortunately, even today, many people deny the genocide, saying it is a lie, that it never happened, and the like,” says Azemina Suljic, a third-grade high school student in Srebrenica.
How does the search work?
“To decide which predictions to show, our systems look for common queries that match what someone starts to enter in the search box,” Google’s explanation reads.
Filip Milosevic from the Share Foundation, an organization specializing in the promotion of rights and freedoms in the online sphere, explains similarly.
“To decide what to display, the systems analyze frequent and popular queries, but they also take into account other factors, such as the language and location of the user doing the search, to make predictions more ‘relevant’ and save as much time as possible.”
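As a rough illustration of that description, the sketch below ranks past queries that share the typed prefix by frequency, with a small boost for matching language and location. It is a minimal toy, assuming a simple query log; the weights, the data structure, and the function names are invented here and do not describe Google’s actual system.

```python
from collections import Counter

def suggest(prefix, user_lang, user_loc, query_log, k=3):
    """Rank logged queries that start with the typed prefix by how
    often they occur, with a hypothetical boost for queries logged
    in the user's language and location."""
    scores = Counter()
    for query, lang, loc in query_log:
        if query.startswith(prefix):
            weight = 1.0
            if lang == user_lang:
                weight += 0.5  # invented locale weight, for illustration
            if loc == user_loc:
                weight += 0.5
            scores[query] += weight
    # Most popular completions first, regardless of whether they are true.
    return [query for query, _ in scores.most_common(k)]

log = [
    ("srebrenica is a lie", "bs", "BA"),
    ("srebrenica is a lie", "bs", "BA"),
    ("srebrenica is a town in bosnia", "en", "US"),
    ("srebrenica is genocide", "en", "BA"),
]
print(suggest("srebrenica is", "bs", "BA", log))
```

The point of the toy is visible in its output: the ranking reflects only popularity and locale, never accuracy, which is exactly the problem raised below.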
Why such results?
But if the Tribunal has ruled that genocide was committed, and if countries such as BiH and Montenegro have laws banning genocide denial, why is an untruth among the first suggested results? Why is there fake content on the bookshelf?
“Such algorithm-based systems cannot be perfect and neutral due to the very nature of the data they process, which is created by humans, who are not neutral themselves,” Milosevic further explains.
Break the cycle – report inappropriate results
According to Google, when the community marks a prediction as inappropriate, the company evaluates it and reacts.
Any user can flag inappropriate predictions directly in the autocomplete field: “Either turn off or remove certain predictions, or report problems with predictions.”
Milosevic also explains that Google “trains artificial intelligence systems, but also hires teams of people who recognize violations of the company’s policies and of the rights of various social groups, and remove them from the platform.”
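A hedged sketch of what such a pipeline could look like follows: an automated check removes predictions that match known policy-violating phrases, while user-reported ones go to a human review queue. The denylist, the thresholds, and the function names are all assumptions made for illustration, not Google’s actual implementation.

```python
# Toy moderation pass over candidate predictions, combining an automated
# phrase check with escalation to human reviewers. All names and rules
# here are invented for illustration.
POLICY_DENYLIST = {"is a lie", "hoax"}

def filter_predictions(candidates, review_queue):
    """Hide predictions the automated check flags; escalate
    user-reported ones to human review instead of showing them."""
    visible = []
    for text, report_count in candidates:
        if any(phrase in text for phrase in POLICY_DENYLIST):
            continue  # removed automatically by the policy check
        if report_count > 0:
            review_queue.append(text)  # a person decides later
            continue
        visible.append(text)
    return visible

queue = []
shown = filter_predictions(
    [("example hoax claim", 0), ("ordinary query", 0), ("reported query", 3)],
    queue,
)
print(shown)  # ['ordinary query']
print(queue)  # ['reported query']
```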
Belief in the first result
”We’ve known for a long time that people trust search results that rank higher than those at the bottom,” Chirag Shah, professor at the University of Washington and founding director of the InfoSeeking Lab and co-founder of the Center for Responsible Artificial Intelligence, explained, adding that “most people don’t even bother to examine the results at the bottom of the page.”
And young people, like those who sent in their Google results, get most of their information through “scrolling” and sharing content with each other, says Anida Sokol, a researcher at Mediacentar Sarajevo.
“This leads to the possibility of encountering misinformation and not checking the sources of information. Given the low level of media and information literacy among young people in BiH, this can be dangerous,” Sokol explains, RSE writes.