Question
Social platforms large and small are struggling to keep their communities safe from hate speech, extremist content, harassment and misinformation. Most recently, far-right agitators posted openly about plans to storm the U.S. Capitol before doing just that on January 6. One solution might be AI: developing algorithms to detect and alert us to toxic and inflammatory comments and flag them for removal. But such systems face big challenges.
The prevalence of hateful or offensive language online has been growing rapidly in recent years, and the problem is now rampant. In some cases, toxic comments online have even resulted in real life violence, from religious nationalism in Myanmar to neo-Nazi propaganda in the U.S. Social media platforms, relying on thousands of human reviewers, are struggling to moderate the ever-increasing volume of harmful content. In 2019, it was reported that Facebook moderators are at risk of suffering from PTSD as a result of repeated exposure to such distressing content. Outsourcing this work to machine learning can help manage the rising volumes of harmful content, while limiting human exposure to it. Indeed, many tech giants have been incorporating algorithms into their content moderation for years.
One such example is Google’s Jigsaw, a company focusing on making the internet safer. In 2017, it helped create Conversation AI, a collaborative research project aiming to detect toxic comments online. However, a tool produced by that project, called Perspective, faced substantial criticism. One common complaint was that it created a general "toxicity score" that wasn’t flexible enough to serve the varying needs of different platforms. Some Web sites, for instance, might require detection of threats but not profanity, while others might have the opposite requirements. Another issue was that the algorithm learned to conflate toxic comments with nontoxic comments that contained words related to gender, sexual orientation, religion or disability.
Following these concerns, the Conversation AI team invited developers to train their own toxicity-detection algorithms and enter them into three competitions hosted on Kaggle, a Google subsidiary known for its community of machine learning practitioners, public data sets and challenges. To help train the AI models, Conversation AI released two public data sets containing over one million toxic and non-toxic comments from Wikipedia and a service called Civil Comments. The comments were rated on toxicity by annotators, with a "Very Toxic" label indicating "a very hateful, aggressive, or disrespectful comment that is very likely to make you leave a discussion or give up on sharing your perspective," and a "Toxic" label meaning "a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective."
Which of the following might be the best title for this text?
Options
A. How Machine-Learning Systems Work
B. How AI Learns to Detect Toxic Online Content
C. Human Exposure to Harmful Content
D. The Possibility of AI Replacing Manual Effort
Answer
B
Analysis
The key words "the best title" in the question stem mark this as a main-idea question, so the answer must be determined from the passage as a whole. Paragraph 1 describes the hate-speech and related problems troubling social media platforms and a possible solution: using artificial intelligence. Paragraph 2 explains why AI is being adopted for content moderation. Paragraphs 3 and 4 give concrete examples of AI applied to content moderation. The passage therefore centers on how AI learns to identify harmful online content, so option [B], "How AI Learns to Detect Toxic Online Content," fits and is the answer.
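As supplementary context for the "toxicity score" that paragraph 3 says Perspective produces, below is a minimal sketch of how a single comment might be scored through the Perspective API. This is an illustration only, not part of the exam text: the endpoint, request fields, response path, and placeholder API key are assumptions drawn from the API's public documentation, not details given in the passage.

```python
# Illustrative sketch only (not from the passage): request a toxicity score
# for one comment from the Perspective API discussed in paragraph 3.
# The endpoint, field names, and API key are assumptions based on the API's
# public documentation, not details given in the exam text.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    "comments:analyze?key=" + API_KEY
)

def toxicity_score(text: str) -> float:
    """Return a 0.0-1.0 toxicity probability for a single comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        # A single general attribute: the kind of one-size-fits-all score
        # the passage says was criticized as inflexible.
        "requestedAttributes": {"TOXICITY": {}},
    }
    request = urllib.request.Request(
        ANALYZE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("You are a wonderful person."))
```

Requesting only the general TOXICITY attribute mirrors the one-size-fits-all score the passage criticizes; a platform that wants threats flagged but profanity allowed would presumably request separate, more specific attributes instead.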