
admin · 2021-02-21

Question    Henry Kissinger published an article in the June 2018 Atlantic Monthly detailing his belief that artificial intelligence (AI) threatens to be a problem for humanity—probably an existential problem.
   He joins Elon Musk, Bill Gates, Stephen Hawking and others who have come out to declare the dangers of AI. The difference is, unlike those scientists and technologists, the former secretary of State speaks with great authority to a wider audience that includes policy makers and political leaders, and so could have a much greater influence.
   And that’s not a good thing. There’s a widespread lack of precision in how we describe AI that is giving rise to a significant apprehension about its use in self-driving cars, automated farms, drone airplanes and many other areas where it could be extremely useful. In particular, Kissinger commits the same error many people do when talking about AI: the so-called conflation error. In this case the error comes about when the success of AI programs in defeating humans in games such as chess and go is conflated with similar successes that might be achieved with AI programs used in supply chain management or claims adjustments or other, more futuristic areas.
   But the two situations are very different. The rules of games like chess and go are prescriptive, somewhat complicated and never change. They are, in the context of AI, "well bounded." A book teaching chess or go written 100 years ago is still relevant today. Training an AI to play one of these games takes advantage of this "boundedness" in a variety of interesting ways, including letting the AI decide how it will play.
   Now, however, imagine the rules of chess could change randomly at any time in any location: Chess on Tuesdays in Chicago has one set of rules, but in Moscow there is a different set of rules on Thursdays. Chess players in Mexico use a completely different board, one for each month of the year. In Sweden the role of each piece can be decided by a player even after the game starts. In a situation like this it’s obviously impossible to write down a single set of rules that everyone can follow at all times in all locations.
   AI is today being applied to business systems like claims and supply chains that, by their very nature, are unbounded. It is impossible to write down all the rules an AI has to follow when adjudicating an insurance claim or managing the supply chain, even for something as simple as bubblegum. The only way to train an AI to manage one of these is to feed it massive amounts of data on all the myriad processes and companies that make up an insurance claim or a simple supply chain. We then hope the AI can do the job—not just efficiently, but also ethically.
People worried about AI’s use in some important fields because______.

Options A. the previous failures
B. the common conflation error
C. the complicated processes
D. the imprecise description of AI

Answer: D

Explanation: This is a factual-detail question. The key words in the stem point to the third paragraph of the passage: "worried about" in the stem paraphrases "apprehension" in the text, and "fields" paraphrases "areas." The passage states that there is a widespread lack of precision in how we describe AI, which is giving rise to significant apprehension about its use in self-driving cars, automated farms, drone airplanes, and many other areas. Therefore, D is the correct option.
Please credit the original source when reposting: https://jikaoti.com/ti/4NpRFFFM
