
admin · 2022-11-01

Question    Social platforms large and small are struggling to keep their communities safe from hate speech, extremist content, harassment and misinformation. Most recently, far-right agitators posted openly about plans to storm the U.S. Capitol before doing just that on January 6. One solution might be AI: developing algorithms to detect and alert us to toxic and inflammatory comments and flag them for removal. But such systems face big challenges.
    The prevalence of hateful or offensive language online has been growing rapidly in recent years, and the problem is now rampant. In some cases, toxic comments online have even resulted in real life violence, from religious nationalism in Myanmar to neo-Nazi propaganda in the U.S. Social media platforms, relying on thousands of human reviewers, are struggling to moderate the ever-increasing volume of harmful content. In 2019, it was reported that Facebook moderators are at risk of suffering from PTSD as a result of repeated exposure to such distressing content. Outsourcing this work to machine learning can help manage the rising volumes of harmful content, while limiting human exposure to it. Indeed, many tech giants have been incorporating algorithms into their content moderation for years.
    One such example is Google’s Jigsaw, a company focusing on making the internet safer. In 2017, it helped create Conversation AI, a collaborative research project aiming to detect toxic comments online. However, a tool produced by that project, called Perspective, faced substantial criticism. One common complaint was that it created a general "toxicity score" that wasn’t flexible enough to serve the varying needs of different platforms. Some Web sites, for instance, might require detection of threats but not profanity, while others might have the opposite requirements. Another issue was that the algorithm learned to conflate toxic comments with nontoxic comments that contained words related to gender, sexual orientation, religion or disability.
    Following these concerns, the Conversation AI team invited developers to train their own toxicity-detection algorithms and enter them into three competitions hosted on Kaggle, a Google subsidiary known for its community of machine learning practitioners, public data sets and challenges. To help train the AI models, Conversation AI released two public data sets containing over one million toxic and non-toxic comments from Wikipedia and a service called Civil Comments. The comments were rated on toxicity by annotators, with a "Very Toxic" label indicating "a very hateful, aggressive, or disrespectful comment that is very likely to make you leave a discussion or give up on sharing your perspective," and a "Toxic" label meaning "a rude, disrespectful, or unreasonable comment that is somewhat likely to make you leave a discussion or give up on sharing your perspective."
Which of the following might be the best title for this text?

Options A. How Machine-Learning Systems Work
B. How AI Learns to Detect Toxic Online Content
C. Human Exposure to Harmful Content
D. The Possibility of AI Replacing Manual Effort

Answer: B

Explanation: The key words "the best title" in the question stem indicate that this is a main-idea question, so the answer must be determined from the passage as a whole. The first paragraph describes the problems plaguing social media platforms, such as hate speech, and a possible solution: using artificial intelligence. The second paragraph explains why AI is being adopted for content moderation. The third and fourth paragraphs give concrete examples of AI applied to content moderation. The passage as a whole thus centers on how AI identifies harmful content online, so option [B], "How AI Learns to Detect Toxic Online Content," fits best and is the answer.