This year marks exactly two centuries since the publication of
Frankenstein; or, The Modern Prometheus, by Mary Shelley. Even before the
invention of the electric light bulb, the author produced a remarkable work of
speculative fiction that would foreshadow many ethical questions to be raised by
technologies yet to come.
Today the rapid growth of artificial intelligence (AI) raises fundamental
questions: What is intelligence, identity, or
consciousness? What makes humans humans?
What is being called artificial general intelligence, machines that would
imitate the way humans think, continues to elude scientists. Yet humans remain
fascinated by the idea of robots that would look, move, and respond like humans,
similar to those recently depicted on popular sci-fi TV series such as
“Westworld” and “Humans”.
Just how people think is still far too complex to be understood, let alone
reproduced, says David Eagleman, a Stanford University neuroscientist. “We are
just in a situation where there are no good theories explaining what
consciousness actually is and how you could ever build a machine to get there.”
But that doesn’t mean crucial ethical issues involving AI aren’t at hand.
The coming use of autonomous vehicles, for example, poses thorny ethical
questions. Human drivers sometimes must make split-second decisions. Their
reactions may be a complex combination of instant reflexes, input from past
driving experiences, and what their eyes and ears tell them in that moment. AI
“vision” today is not nearly as sophisticated as that of humans. And to
anticipate every imaginable driving situation is a difficult programming challenge.
Whenever decisions are based on masses of data, “you quickly get into a lot
of ethical questions,” notes Tan Kiat How, chief executive of a Singapore-based
agency that is helping the government develop a voluntary code for the ethical
use of AI. Along with Singapore, other governments and mega-corporations are
beginning to establish their own guidelines. Britain is setting up a data ethics
center. India released its AI ethics strategy this spring.
On June 7 Google pledged not to “design or deploy AI” that would cause
“overall harm,” or to develop AI-directed weapons or use AI for surveillance
that would violate international norms. It also pledged not to deploy AI whose
use would violate international laws or human rights.
While the statement is vague, it represents one starting point. So does the
idea that decisions made by AI systems should be explainable, transparent, and fair.
To put it another way: How can we make sure that the thinking of
intelligent machines reflects humanity’s highest values? Only then will they be
useful servants and not Frankenstein’s out-of-control monster.
31. Mary Shelley’s novel Frankenstein is mentioned because it
A. fascinates AI scientists all over the world.
B. has remained popular for as long as 200 years.
C. involves some concerns raised by AI today.
D. has sparked serious ethical controversies.
32. In David Eagleman’s opinion, our current knowledge of consciousness
A. helps explain artificial intelligence.
B. can be misleading to robot making.
C. inspires popular sci-fi TV series.
D. is too limited for us to reproduce it.
33. The solution to the ethical issues brought by autonomous vehicles
A. can hardly ever be found.
B. is still beyond our capacity.
C. causes little public concern.
D. has aroused much curiosity.
34. The author’s attitude toward Google’s pledge is one of
35. Which of the following would be the best title for the text?
A. AI’s Future: In the Hands of Tech Giants
B. Frankenstein, the Novel Predicting the Age of AI
C. The Conscience of AI: Complex But Inevitable
D. AI Shall Be Killers Once Out of Control