There is an old joke among pilots that says the ideal flight crew is a computer, a pilot and a dog. The computer's job is to fly the plane. The pilot is there to feed the dog. And the dog's job is to bite the pilot if he tries to touch the computer.
Handing complex tasks over to computers is not new. But recent progress in machine learning, a subfield of artificial intelligence (AI), has enabled computers to handle many problems that were previously beyond them. The result has been an AI boom, with computers moving into everything from medical diagnosis and insurance to self-driving cars.
There is a problem, though. Machine learning works by giving computers the ability to train themselves, which adapts their programming to the task at hand. People struggle to understand exactly how those self-written programs do what they do. When algorithms are doing small tasks, such as playing chess or recommending a film to watch, this "black box" problem can be safely ignored. When they are deciding who gets a loan, or how to drive a car through a crowded city, it is potentially harmful. And when things go wrong, customers, regulators and the courts will want to know why.
For some people this is a reason to hold back AI. France's digital-economy minister, Mounir Mahjoubi, has said that the government should not use any algorithm whose decisions cannot be explained. But that is an overreaction. The difficulties caused by clever computers are not unheard of. Society already has plenty of experience dealing with problematic black boxes; the most common are called human beings, and nobody can fully explain what goes on inside a human brain either. In response to human fallibility, society has evolved a series of workable coping mechanisms, called laws, rules and regulations. Many of these can be applied to machines as well.
Humans have worked with computers on complex tasks for decades. One lesson from such applications is that, wherever possible, people should supervise the machines. As the joke suggests, pilots remain necessary in case something happens that is beyond the scope of the computer. As computers spread, companies and governments should ensure the first line of defence is a real person who can overrule the algorithms if necessary.
AI is bound to run into some trouble. But it also promises extraordinary benefits, and the difficulties it poses are not unprecedented. If the new black boxes prove tricky, there will be time to toughen the rules.
Question 1. Why is the joke mentioned in Paragraph 1?
A. To make people laugh.
B. To predict the future of the airplane industry.
C. To introduce the latest model of airplanes.
D. To raise people's awareness of computers performing complex tasks.
Question 2.
A. The shortage of labor force.
B. The rise of artificial intelligence.
C. The encouragement of governments.
D. The development of the economy.
Question 3.
A. Black boxes always go wrong.
B. AI may become too smart to be controlled by human beings.
C. They don't understand how AI works.
D. Computers are trying to replace human beings.
Question 4.
A. Stop panicking and welcome AI.
B. Reduce the usage of algorithms.
C. Supervise AI wherever possible.
D. Deal with the "black box" problem.
Question 5.
A. Supportive.
B. Indifferent.
C. Pessimistic.
D. Neutral.