Why Google’s Supreme Court Case Could Rattle the Internet

2023-02-26
We’ve all lost countless hours to online recommendation algorithms that suggest we might enjoy watching yet another cat video or following just one more influencer. But in mere months, social media platforms may need to find new ways to keep users engaged—and the Internet might be in for a major overhaul.

On Tuesday the Supreme Court began hearing arguments in a case called Gonzalez v. Google, which questions whether tech giants can be held legally responsible for content promoted by their algorithms. The case targets a cornerstone of today’s Internet: Section 230, a statute that protects online platforms from liability for content produced by others. If the Supreme Court weakens the law, platforms may need to revise or eliminate the recommendation algorithms that govern their feeds. And if the Court scraps the law entirely, it will leave tech companies more vulnerable to lawsuits based on user content.

“If there are no protections for user-generated content, I don’t think it’s hyperbolic to say that this is probably the end of social media,” says Hany Farid, a computer scientist at the University of California, Berkeley. Social platforms, such as Twitter and YouTube, rely heavily on two things: content created by users and recommendation algorithms that promote the content most likely to capture other users’ attention and keep them on the platform as long as possible. The Court’s verdict could make either or both strategies more dangerous for tech companies.

Gonzalez v. Google originated in the events of November 2015, when armed men affiliated with the terrorist organization ISIS killed 130 people in six coordinated attacks across Paris. Nohemi Gonzalez, a 23-year-old student, was the only American to die in the attacks. In the aftermath, her family sued Google, which owns YouTube, arguing that the video platform’s recommendation algorithm promoted content from the terrorist group.

Google argues that using algorithms to sort content is “quintessential publishing,” something necessary for users to be able to navigate the Internet at all, and therefore protected under Section 230. That statute, which was originally part of the Communications Decency Act of 1996, states that, under law, computer service providers cannot be treated as the publishers of information created by someone else. It’s a measure dating to the early days of the Internet that was meant to keep technology companies from intervening heavily in what happens online.

“This law was designed to be speech-maximizing, which is to say that by giving companies pretty broad immunity from liability, you allow companies to create platforms where people can speak without a lot of proactive monitoring,” says Gautam Hans, an associate clinical professor of law at Cornell Law School.

Gonzalez argues that recommendation algorithms go beyond simply deciding what content to display, as “neutral tools” like search engines do, and instead actively promote content. But some experts disagree. “This distinction just absolutely does not make sense,” says Brandie Nonnecke, a technology policy specialist and director of the CITRIS Policy Lab, headquartered at U.C. Berkeley. She contributed to a brief about the case that argues that both types of algorithms use preexisting information to determine what content to show. “Differentiating the display of content and the recommendation of content is a nonstarter,” Nonnecke says.

In deciding Gonzalez v. Google, the Supreme Court can follow one of three paths. If the Court sides with Google and declares that Section 230 is fine as is, everything stays the same. At the most extreme, the Court could toss all of Section 230 out the window, leaving tech giants open to lawsuits over not just content that their algorithms recommend but also whatever users say on their sites.

Or the Court can take a middle path, adapting the statute in a specific way that could require technology companies to face some additional liability in specific circumstances. That scenario might play out a bit like a controversial 2018 modification to Section 230, which made platforms responsible for third-party content tied to sex trafficking. Given the constraints of Gonzalez v. Google, modifying Section 230 might involve changes such as excluding content related to terrorism—or requiring companies to rein in algorithms that push ever more extreme content and that prioritize advertising gains over the interests of users or society, Farid says.

Hans doesn’t expect the Supreme Court to release its decision until late June. But he warns that if Section 230 falls, big changes to the Internet will follow fast—with ripples reaching far beyond YouTube and Google. Technology platforms, already dominated by a handful of powerful companies, may consolidate even more. And the companies that remain may crack down on what users can post, giving the case implications for individuals’ freedom of speech. “That’s the downstream effect that I think we all should be worrying about,” Hans says.

Even if the Supreme Court sides with Google, experts say momentum is building for the government to rein in big tech, whether through modifying Section 230 or introducing other measures. Hans says he hopes Congress takes the lead, although he notes that lawmakers have not yet succeeded in passing any new legislation to this end. Nonnecke suggests that an alternative approach could focus on giving users more control over recommendation algorithms or a way to opt out of sharing personal information with algorithms.

But the Supreme Court doesn’t seem likely to step away from the issue, either. A second case being argued this week, called Twitter v. Taamneh, also looks at tech platforms’ liability for proterrorism content. And as early as this fall, experts expect the Supreme Court to take up cases that explore two conflicting state laws about content moderation by social media platforms.

“No matter what happens in this case, regulation of technology companies is going to continue to be an issue for the Court,” Hans says. “We’re still going to be dealing with the Supreme Court and technology regulation for a while.”