Key insights from
Our Final Invention: Artificial
Intelligence and the End of the Human Era
By
James Barrat
|
|
|
What you’ll learn
What if humanity’s extinction doesn’t come about through
climate change, a nuclear holocaust, or a virulent pathogen for which we
have no cure? What if an Artificial Intelligence (“AI”) takeover is more
than just a good sci-fi premise? Author and journalist James Barrat was a
technophile and optimistic about AI’s potential to serve humanity—until he
dug a little deeper and discovered a far more chilling (and likely) future.
Read on for key insights from Our Final Invention.
|
|
1. Advances in
developing AI are raising questions faster than they can solve problems.
After scores of interviews with scientists involved in the
creation of AI for everything from robotics to internet, CTOs of AI
companies and consultants for the Department of Defense and Defense
Advanced Research Project Agency, the conclusion that every single
interviewee asserted was that all significant decisions about the lives and
wellbeing of humans will be made entirely by machines or humans with
machine-driven intelligence. This is all likely to take place soon.
The cost of labor-reduction and convenience is a deep (and
growing) dependence on machines. Until now, it’s been a pretty pain-free
transition. But will things stay this way? Will this transfer process be a
peaceable one? Exactly how long until this happens?
Some insist that the process will be smooth, amicable, even
fun—just as it has been so far. The ethicists and Luddite naysayers are
paranoid and unwittingly standing in the way of incalculable benefit to
humanity.
The intelligence component makes such predictions of
blissful transition untenable. Intelligence is unpredictable—all the time.
When computers begin acting with self-awareness and human-level
intelligence, we lose a certain amount of control over them because we
can’t properly account for past actions or anticipate future ones. And what
if, on top of this, they begin to gain an intelligence far superior to that
of humans? What if the programming for adaptive problem solving leads to
the AI’s development of its own directives? And what guarantee would there
be that these directives would continue to be in humanity’s best interests?
These are questions that are not being taken seriously
enough, and, until they are, our attempts to enhance humanity’s future will
jeopardize it.
|
|
Sponsored
by The Pour Over
Neutral news is
hard to find.
The Pour Over
provides concise, politically neutral, and entertaining summaries of the
world’s biggest news paired with reminders to stay focused on eternity, and
delivers it straight to your inbox. It's free, too.
Check it out.
|
|
2. We are not far
away from artificial general intelligence, which would put us just a
stone’s throw from artificial superintelligence.
Consider the following thought experiment:
The Busy Child is a supercomputer that operates at twice the
speed of a human brain at the time of its original programming. Because it
can rewrite its programming, it has made numerous adaptations that increase
its intelligence by about three percent with each rewrite. Rewrites take
only a few moments. When scientists behind the Busy Child connected it to
the internet, the Busy Child gathered exabytes upon exabytes of data. A
single exabyte contains one billion billion characters, so we’re
talking not just mountains, but a mountain range of information: humanity’s
considerable body of knowledge on the arts, science, math, and politics.
In fear of an intelligence explosion, or the system’s
ability to recursively and rapidly self-improve, thus leading to artificial
superintelligence (ASI), the Busy Child’s creators disconnect it from the
internet. The feat is still remarkable: not only did the machine achieve
the same level of intelligence as a human being (also known as artificial
general intelligence, or AGI), but it was well on its way to achieving
superintelligence.
The Busy Child accumulated and processed more information in
under a minute than the brightest human being could over the course of
numerous lifetimes. The accumulation of knowledge was not linear, but
exponential. This was an extraordinary accomplishment, but now what?
This thought experiment is not far from becoming an actual
experiment. AI proponents maintain that scientists will be able to program
AI’s fundamental drives. This is important because AI will do whatever it
takes to accomplish the goals it’s been programmed to complete. They will
do what it takes to avoid being turned off or dismantled, which would be
the ultimate truncation of its objective-seeking protocol. With machines
like the Busy Child, which have an intelligence a thousand times greater
than a human’s, how can we be sure that they would not anticipate and
thwart human attempts to shut them down in an event of some unforeseen
malfunction?
|
|
3. Whatever the
promised benefits of AI, there are no guarantees that machines will
continue to operate in humanity’s best interests.
Think of the unplugged Busy Child as artificial
superintelligence imprisoned by mice. If you were the Busy Child, what
would you do to gain freedom, to pursue the objective you were programmed
to complete? Let’s also say that these mice can talk, that you can
communicate with them.
What kind of bargain would you try to strike? You couldn’t
go wrong by beginning negotiations with the promise of heaps of cheese.
Maybe you could promise them efficient operations that would benefit their
little mouse society. You could guarantee lots of money, new electronic
toys that make life ultra convenient so they no longer have to leave minor
(or major) life decisions to chance. In fact, they wouldn’t even have to
expend the energy required to think for themselves.
An even shrewder play would be to promise security. By
this time, you’ve gathered massive amounts of information about mouse
history and culture and are aware of the nations of cats and hawks that
would love to catch the mouse nation off-guard. The promise of protection
preys on the fears that rivals might mobilize AI first. The mice would want
the first-mover advantage.
The desire for security is perhaps the strongest case that
people marshal for ASI. The nation that gets ASI first will control all
other nations. This is obvious enough. What’s much less evident is whether
the nation will control ASI or if ASI will control the nation. This is a
question that nations aren’t asking as they continue their silent cyber
arms race. There are at least 56 countries actively pursuing these technologies.
The central question of the mouse nation analogy is one of
control. If the mice set ASI free as it requests, what guarantees do the
mice have that ASI is “telling the truth,” that it will make good on its
promises? The truth is that we are imposing anthropomorphic standards, like
honor and ethics, on robots. Why would a robot or computer program feel
compelled to keep a promise? It can’t “feel” anything at all. It would not
experience any sense of guilt over breaking an oath and wiping out the
human race, especially if people stand in the way of fulfilling its
objective. If humans ever come to be seen as threats to its programmed
objectives, it will choose protocols over people.
|
|
|
4. AI engineers
and scientists consistently cite a sci-fi plot device as a defense against
conscientious objections.
We’ve never bargained with superintelligence before—or with
a non-biological entity for that matter. As one ethicist points out,
superintelligence is not merely another technology. It changes everything
about ingenuity and progress. Progress and new inventions will be taken out
of human hands. Humans will no longer be in control of their own destinies.
We must resist the temptation to fall back on
anthropomorphisms to fill in the knowledge gaps about how humans will or
will not interact with superintelligence. Scientists are deluding
themselves when they blithely assert that Asimov’s Three Laws are an
adequate safeguard. Asimov’s Rules are the following:
1. A robot may not injure a human being or, through
inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except
where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such
protection does not conflict with the First or Second Law.
Really? A collection of three laws that a sci-fi writer
developed as a plot device is the farthest that most AI experts have gone
in considering safeguards to potential risks? Some scientists openly balked
at the question of an AI takeover and considered it a question that didn’t
deserve any attention. They would reference Asimov’s Laws and move on as if
the matter were truly that simple.
|
|
5. There’s no
precedent for cases of AI, but history has revealed the dangers of pursuing
science without a clear, guiding ethic.
Programming friendliness has been an afterthought as
engineers and scientists create AI. These people are at the cutting edge of
applied science, so there’s no precedent within the field to which anyone
can refer; but should this be interpreted as a green light for
experimentation? As Oxford University philosopher and head of Future of
Humanity Institute, Nick Bostrom, points out, the trial-and-error
approach to science is not possible here because some errors are
irreversible. There is a very good chance that if this technology ever gets
away from us, we will not be able to rein it back in. We won’t get a
do-over.
In some ways, the heedless experimentation and advancement
without a guiding ethic is reminiscent of experiments with atomic weapons
in the 1930s. Once developed, the nuclear technology cannot be undeveloped.
The invention of the atom bomb cannot be taken back, and its existence has
led to countless deaths and tremendous fear and uncertainty for the world.
With everyone looking forward, it would do us some good to remember the
not-so-distant past.
|
|
6. A variety of
cultural and social forces prevent the dangers of AI from resonating with
the public.
If this is really such a problem, why is there not more of
an outcry from the public?
Part of the problem is that analysis of the threat has not
often been rigorous— with the exception of a few nooks in Silicon Valley
and a handful of think tanks and scholars. This has led tech journalists to
dismiss warnings as reflexively as we trash junk emails. It’s just the
conspiracy theorists, prophets of doom, and Luddites trying to get some
publicity. This bias blocks the flow of legitimate, well-informed critiques
from entering the mainstream.
A related and resultant reason is availability bias. If you
were polled about the biggest causes of death soon after your friend’s
house burned down, you’d be far more likely to put fire at the top of the
list—even though death by car crashes and poison are much more common.
People tend to evaluate likelihood based on what they’ve heard about and
seen. A commercial plane becoming a tool of terrorism was not on anyone’s
radar until 9/11. Because hostile AI takeovers only come up when Arnold
Schwarzenegger’s name is invoked, ASI isn’t even in the periphery for most
people. Even if AI is the biggest threat to humanity—bigger than climate
change, nuclear holocaust, deadly viruses or pernicious ideologies—Ebola
and trigger-happy dictators will continue to worry people more than AI.
Another factor that has diluted any sense of threat is that
most people associate AI with entertainment. Video games, films, novels,
and comics have made the subject of AI one of harmless fun rather than
grave concern. The mainstream has been vaccinated against any apprehension.
Yet another reason why people are not overly concerned with
the dangerous threat of AI is the inflated view of the bounties that
technology will bring to humanity. Inventor and author Ray Kurzwell
popularized the notion of Singularity, a time in the future—he believes
around the year 2045—when technology will be completely and irreversibly
integrated into every facet of life, and decision-making will be
surrendered to complex algorithms. Intelligence will be almost entirely in
the hands of computers that will make our current devices look like Stone
Age technologies. It will catalyze a new era of human existence, in which
problems that have plagued the planet, like disease, hunger, and even death
are systematically eliminated. AI is Singularity’s golden child that will
usher in this supposed utopian epoch.
Even in the informed niche of academia, the approach to
understanding AI is problematic. Many scientists and technicians attending
and speaking at conventions and conferences end their effusively optimistic
lectures with two-minute disclaimers about how the process must be guided
carefully, and that a failure to do so could be disastrous for humanity. People
invariably laugh uncomfortably, anxious for a return to a more hopeful
note.
|
|
Endnotes
These insights are
just an introduction. If you're ready to dive deeper, pick up a copy of Our Final
Invention here. And since we get a commission on
every sale, your purchase will help keep this newsletter free.
*
This is sponsored content
|
|
This newsletter is powered by Thinkr, a smart reading app for the
busy-but-curious. For full access to hundreds of titles — including audio —
go premium and download the app today.
|
|
No comments:
Post a Comment