As Artificial Intelligence (AI) becomes increasingly sophisticated, there are growing concerns that robots could become a threat
admin
2022-07-13
Question
As Artificial Intelligence (AI) becomes increasingly sophisticated, there are growing concerns that robots could become a threat. This danger can be avoided, according to computer science professor Stuart Russell, if we figure out how to turn human values into a programmable code.
Russell argues that as robots take on more complicated tasks, it’s necessary to translate our morals into AI language.
For example, if a robot does chores around the house, you wouldn’t want it to put the pet cat in the oven to make dinner for the hungry children. "You would want that robot preloaded with a good set of values," said Russell.
Some robots are already programmed with basic human values. For example, mobile robots have been programmed to keep a comfortable distance from humans. Obviously there are cultural differences, but if you were talking to another person and they came up close in your personal space, you wouldn’t think that’s the kind of thing a properly brought-up person would do.
It will be possible to create more sophisticated moral machines, if only we can find a way to set out human values as clear rules.
Robots could also learn values from drawing patterns from large sets of data on human behavior. They are dangerous only if programmers are careless.
The biggest concern with robots going against human values is that human beings fail to do sufficient testing and they’ve produced a system that will break some kind of taboo (禁忌).
One simple check would be to program a robot to check the correct course of action with a human when presented with an unusual situation.
If the robot is unsure whether an animal is suitable for the microwave, it has the opportunity to stop, send out beeps (嘟嘟声), and ask for directions from a human. If we humans aren’t quite sure about a decision, we go and ask somebody else.
The most difficult step in programming values will be deciding exactly what we believe is moral, and how to create a set of ethical rules. But if we come up with an answer, robots could be good for humanity.
What would we think of a person who invades our personal space according to the author?
Options
A、They are aggressive.
B、They are outgoing.
C、They are ignorant.
D、They are ill-bred.
Answer
D
Explanation
This is an inference question. The relevant sentence says that if you were talking to another person and they came close, entering your personal space, you would not think that is the kind of thing a properly brought-up person would do. It follows that when someone comes too close and invades our personal space, we regard that person as ill-bred. The answer is therefore D.
Please credit the original source when reprinting: https://jikaoti.com/ti/8pbiFFFM
College English Test Band 4 (CET-4)