People, Not Google’s Algorithm, Create Their Own Partisan ‘Bubbles’ Online

2023-05-25 11:05:17

From Thanksgiving dinner conversations to pop culture discourse, it’s easy to feel like individuals of different political ideologies are occupying completely separate worlds, especially online. People often blame algorithms—the invisible sets of rules that shape online landscapes, from social media to search engines—for cordoning us off into digital “filter bubbles” by feeding us content that reinforces our preexisting worldview.

Algorithms are always biased: Studies have shown that Facebook ads target particular racial and gender demographics. Dating apps select for matches based on a user’s previous swipe history. And search engines prioritize links based on what they deem most relevant. But according to new research, not every algorithm drives political polarization.

A study published today in Nature found that Google’s search engine does not return disproportionately partisan results. Instead politically polarized Google users tend to silo themselves by clicking on links to partisan news sites. These findings suggest that, at least when it comes to Google searches, it may be easier for people to escape online echo chambers than previously thought—but only if they choose to do so.

Algorithms pervade nearly every aspect of our online existence—and are capable of shaping the way we look at the world around us. “They do have some impact on how we consume information and therefore how we form opinions,” says Katherine Ognyanova, a communications researcher at Rutgers University and co-author of the new research.

But how much these programs drive political polarization can sometimes be difficult to quantify. An algorithm might look at “who you are, where you are, what kind of device you’re searching from, geography, language,” Ognyanova says. “But we don’t really know exactly how the algorithm works. It is a black box.”

Most studies analyzing algorithm-driven political polarization have focused on social media platforms such as Twitter and Facebook rather than search engines. That’s because, until recently, it’s been easier for researchers to obtain usable data from social media sites with their public-facing software interfaces. “For search engines, there is no such tool,” says Daniel Trielli, an incoming assistant professor of media and democracy at the University of Maryland, who was not involved with the study.

But Ognyanova and her co-authors found a way around this problem. Rather than relying on anonymized public data, they sent volunteers a browser extension that logged all of their Google search results—and the links they followed from those pages—over the course of several months. The extension acted like backyard camera traps that photograph animals—in this case, it provided snapshots of everything populating each participant’s online landscape.
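To make the camera-trap analogy concrete, here is a minimal sketch of what such a logging extension’s content script could look like. It is an illustration under assumed details, not the study’s actual code: the CSS selector, the collection endpoint and the payload format are all hypothetical.

```typescript
// Hypothetical content-script sketch (not the study's actual extension):
// runs on Google search result pages, records which result links were
// shown and which one the participant clicked.

// Collect the result links currently rendered on the page.
function snapshotResults(): string[] {
  const anchors = document.querySelectorAll<HTMLAnchorElement>(
    "#search a[href^='http']" // assumed selector for organic results
  );
  return Array.from(anchors, (a) => a.href);
}

// Send a record to a hypothetical research collection endpoint.
function logEvent(kind: "exposure" | "click", urls: string[]): void {
  void fetch("https://example-study-server.invalid/log", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ kind, urls, timestamp: Date.now() }),
  });
}

// Record everything the participant was shown when the page loads...
logEvent("exposure", snapshotResults());

// ...and record the specific link whenever one is followed.
document.addEventListener("click", (event) => {
  const target = event.target;
  if (!(target instanceof Element)) return;
  const link = target.closest("a");
  if (link && link.href.startsWith("http")) {
    logEvent("click", [link.href]);
  }
});
```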

The researchers collected data from hundreds of Google users over the three months leading up to the 2018 U.S. midterm election and the nine months before the 2020 U.S. presidential election. Then they analyzed what they had gathered in relation to participants’ age and self-reported political orientation, ranked on a scale of one to seven, from strong Democrat to strong Republican. Yotam Shmargad, a computational social scientist at the University of Arizona, who was not a member of the research team, calls the approach “groundbreaking” for melding real-world behavioral data on participants’ search activity with survey information about their political leanings.

Field data of this type are also extremely valuable from a policymaking perspective, says University of Pennsylvania cybersecurity researcher Homa Hosseinmardi, who also did not participate in the research. In order to ensure that search engine giants such as Google—which sees more than 8.5 billion queries each day—operate with people’s best interest in mind, it’s not enough to know how an algorithm works. “You need to see how people are using the algorithm,” Hosseinmardi says.

While many lawmakers are currently pushing for huge tech companies to release their anonymized user data publicly, some researchers worry that this will incentivize platforms to release misleading, skewed or incomplete information. One notable instance was when Meta hired a team of scientists to investigate the platform’s relationship to democracy and political polarization and then failed to provide half of the data it promised to share. “I think it makes a lot more sense to go straight to the user,” says Ronald Robertson, a network scientist at Stanford University and lead author of the new study.

Ultimately, the team found that a quick Google search did not serve users a selection of news stories based on their political leanings. “Google doesn’t do that much personalization in general,” Robertson says. “And if personalization is low, then maybe the algorithm isn’t really changing the page all that much.” Instead strongly partisan users were more likely to click on partisan links that fit with their preexisting worldview.
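The distinction between exposure (what the search engine returns) and engagement (what the user clicks) can be illustrated with a small sketch. The domain scores, field names and numbers below are assumptions for illustration only, not the study’s data or scoring method.

```typescript
// Toy comparison of exposure vs. engagement for one participant.
// Domain partisanship scores here are invented for illustration.

interface Participant {
  ideology: number;          // self-reported, 1 (strong Democrat) to 7 (strong Republican)
  shownDomains: string[];    // domains returned in search results ("exposure")
  clickedDomains: string[];  // domains the participant actually followed ("engagement")
}

// Hypothetical partisanship scores per news domain, -1 (left) to +1 (right).
const domainScore: Record<string, number> = {
  "leftnews.example": -0.8,
  "centrist.example": 0.0,
  "rightnews.example": 0.8,
};

function meanScore(domains: string[]): number {
  const scored = domains
    .map((d) => domainScore[d])
    .filter((s): s is number => s !== undefined);
  if (scored.length === 0) return 0;
  return scored.reduce((sum, s) => sum + s, 0) / scored.length;
}

function summarize(p: Participant) {
  return {
    ideology: p.ideology,
    exposure: meanScore(p.shownDomains),
    engagement: meanScore(p.clickedDomains),
  };
}

console.log(
  summarize({
    ideology: 7,
    shownDomains: ["leftnews.example", "centrist.example", "rightnews.example"],
    clickedDomains: ["rightnews.example", "rightnews.example"],
  })
);
// -> { ideology: 7, exposure: 0, engagement: 0.8 }
```

In this toy example, the results shown to a strongly Republican participant average out to a roughly neutral score, while the links that participant follows skew right—the pattern the study reports.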

This doesn’t mean that Google’s algorithm is faultless. The researchers noticed that unreliable or downright misleading news sources still popped up in the results, regardless of whether or not users interacted with them. “There’s also other contexts where Google has done pretty problematic stuff,” Robertson says, including dramatically underrepresenting women of color in its image search results.

A spokesperson for Google said that the company “appreciate[s] the researchers’ work in the new study.” In an email statement, the company said that it tries to keep its algorithm both “relevant and reliable.” The search function, it said, is not designed to infer sensitive information—race, religion or political affiliation—in its results. 

Shmargad points out that the study’s data aren’t entirely bias-free if you break them down to a more granular level. “It doesn’t appear like there’s much algorithmic bias happening across party lines,” he says, “but there might be some algorithmic bias happening across age groups.”

Users age 65 and older were subject to more right-leaning links in their Google search results than other age groups regardless of their political identity. Because the effect was slight and the oldest age group only made up about one fifth of the total participants, however, the greater exposure’s impact on the overall results of the study disappeared in the macroanalysis.

Still, the findings reflect a growing body of research that suggests that the role of algorithms in creating political bubbles might be overstated. “I’m not against blaming platforms,” Trielli says. “But it’s kind of disconcerting to know that it’s not just about making sure that platforms behave well. Our personal motivations to filter what we read to fit our political biases remain strong.”

“We also want to be divided,” Trielli adds.

The silver lining, Ognyanova says, is that “this study shows that it is not that difficult for people to escape their [ideological] bubble.” That may be so. But first they have to want out.
