National Pub date: February 19, 2019
Title: DEEP THINKING
Subtitle: Twenty-Five Ways of Looking at AI
By: John Brockman
Length: 90,000 words
Headline: Science world luminary John Brockman assembles twenty-five of the most
important scientific minds, people who have been thinking about the field of artificial
intelligence for most of their careers, for an unparalleled round-table examination of
mind, thinking, intelligence, and what it means to be human.
Description:
"Artificial intelligence is today's story—the story behind all other stories. It is the Second
Coming and the Apocalypse at the same time: Good AI versus evil AI." —John
Brockman
More than sixty years ago, mathematician-philosopher Norbert Wiener published a book
on the place of machines in society that ended with a warning: “we shall never receive
the right answers to our questions unless we ask the right questions…. The hour is very
late, and the choice of good and evil knocks at our door.”
In the wake of advances in unsupervised, self-improving machine learning, a small but
influential community of thinkers is considering Wiener’s words again. In Deep
Thinking, John Brockman gathers their disparate visions of where AI might be taking us.
The fruit of the long history of Brockman’s profound engagement with the most
important scientific minds who have been thinking about AI—from Alison Gopnik and
David Deutsch to Frank Wilczek and Stephen Wolfram— Deep Thinking is an ideal
introduction to the landscape of crucial issues AI presents.
The collision between opposing perspectives is salutary and exhilarating; some of these
figures, such as computer scientist Stuart Russell, Skype co-founder Jaan Tallinn, and
physicist Max Tegmark, are deeply concerned with the threat of AI, including the
existential one, while others, notably robotics entrepreneur Rodney Brooks, philosopher
Daniel Dennett, and bestselling author Steven Pinker, have a very different view. Serious,
searching and authoritative, Deep Thinking lays out the intellectual landscape of one of
the most important topics of our time.
Participants in The Deep Thinking Project
Chris Anderson is an entrepreneur; a roboticist; former editor-in-chief of Wired;
co-founder and CEO of 3DR; and author of The Long Tail, Free, and Makers.
Rodney Brooks is a computer scientist; Panasonic Professor of Robotics, emeritus, MIT;
former director, MIT Computer Science Lab; and founder, chairman, and CTO of
Rethink Robotics. He is the author of Flesh and Machines.
George M. Church is Robert Winthrop Professor of Genetics at Harvard Medical
School; Professor of Health Sciences and Technology, Harvard-MIT; and co-author (with
Ed Regis) of Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves.
Daniel C. Dennett is University Professor and Austin B. Fletcher Professor of
Philosophy and director of the Center for Cognitive Studies at Tufts University. He is the
author of a dozen books, including Consciousness Explained and, most recently, From
Bacteria to Bach and Back: The Evolution of Minds.
David Deutsch is a quantum physicist and a member of the Centre for Quantum
Computation at the Clarendon Laboratory, Oxford University. He is the author of The
Fabric of Reality and The Beginning of Infinity.
Anca Dragan is an assistant professor in the Department of Electrical Engineering and
Computer Sciences at UC Berkeley. She co-founded and serves on the steering
committee for the Berkeley AI Research (BAIR) Lab and is a co-principal investigator in
Berkeley’s Center for Human-Compatible AI.
George Dyson is a historian of science and technology and the author of Baidarka: The
Kayak, Darwin Among the Machines, Project Orion, and Turing’s Cathedral.
Peter Galison is a science historian, Joseph Pellegrino University Professor and
co-founder of the Black Hole Initiative at Harvard University, and the author of
Einstein’s Clocks and Poincaré’s Maps: Empires of Time.
Neil Gershenfeld is a physicist and director of MIT’s Center for Bits and Atoms. He is
the author of FAB, co-author (with Alan Gershenfeld & Joel Cutcher-Gershenfeld) of
Designing Reality, and founder of the global fab lab network.
Alison Gopnik is a developmental psychologist at UC Berkeley; her books include The
Philosophical Baby and, most recently, The Gardener and the Carpenter: What the New
Science of Child Development Tells Us About the Relationship Between Parents and
Children.
Tom Griffiths is Henry R. Luce Professor of Information, Technology, Consciousness,
and Culture at Princeton University. He is co-author (with Brian Christian) of Algorithms
to Live By.
W. Daniel “Danny” Hillis is an inventor, entrepreneur, and computer scientist, Judge
Widney Professor of Engineering and Medicine at USC, and author of The Pattern on the
Stone: The Simple Ideas That Make Computers Work.
Caroline A. Jones is a professor of art history in the Department of Architecture at MIT
and author of Eyesight Alone: Clement Greenberg’s Modernism and the
Bureaucratization of the Senses; Machine in the Studio: Constructing the Postwar
American Artist; and The Global Work of Art.
David Kaiser is Germeshausen Professor of the History of Science and professor of
physics at MIT, and head of its Program in Science, Technology & Society. He is the
author of How the Hippies Saved Physics: Science, Counterculture, and the Quantum
Revival and American Physics and the Cold War Bubble (forthcoming).
Seth Lloyd is a theoretical physicist at MIT, Nam P. Suh Professor in the Department of
Mechanical Engineering, and an external professor at the Santa Fe Institute. He is the
author of Programming the Universe: A Quantum Computer Scientist Takes on the
Cosmos.
Hans Ulrich Obrist is artistic director of the Serpentine Gallery, London, and the author
of Ways of Curating and Lives of the Artists, Lives of the Architects.
Judea Pearl is professor of computer science and director of the Cognitive Systems
Laboratory at UCLA. His most recent book, co-authored with Dana Mackenzie, is The
Book of Why: The New Science of Cause and Effect.
Alex “Sandy” Pentland is Toshiba Professor and professor of media arts and sciences,
MIT; director of the Human Dynamics and Connection Science labs and the Media Lab
Entrepreneurship Program; and the author of Social Physics.
Steven Pinker, a Johnstone Family Professor in the Department of Psychology at
Harvard University, is an experimental psychologist who conducts research in visual
cognition, psycholinguistics, and social relations. He is the author of eleven books,
including The Blank Slate, The Better Angels of Our Nature, and, most recently,
Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.
Venki Ramakrishnan is a scientist at the Medical Research Council Laboratory of
Molecular Biology, Cambridge University; recipient of the Nobel Prize in Chemistry
(2009); current president of the Royal Society; and the author of Gene Machine: The
Race to Discover the Secrets of the Ribosome.
Stuart Russell is a professor of computer science and Smith-Zadeh Professor in
Engineering at UC Berkeley. He is the coauthor (with Peter Norvig) of Artificial
Intelligence: A Modern Approach.
Jaan Tallinn, a computer programmer, theoretical physicist, and investor, is a
co-developer of Skype and Kazaa.
Max Tegmark is an MIT physicist and AI researcher; president of the Future of Life
Institute; scientific director of the Foundational Questions Institute; and the author of Our
Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence.
Frank Wilczek is Herman Feshbach Professor of Physics at MIT, recipient of the 2004
Nobel Prize in physics, and the author of A Beautiful Question: Finding Nature’s Deep
Design.
Stephen Wolfram is a scientist, inventor, and the founder and CEO of Wolfram
Research. He is the creator of the symbolic computation program Mathematica and its
programming language, Wolfram Language, as well as the knowledge engine
Wolfram|Alpha. He is also the author of A New Kind of Science.
Deep Thinking
Twenty-five Ways of Looking at AI
edited by John Brockman
Penguin Press — February 19, 2019
Table of Contents
Acknowledgments
Introduction: On the Promise and Peril of AI
by John Brockman
Seth Lloyd: Wrong, but More Relevant Than Ever
It is exactly in the extension of the cybernetic idea to human beings that Wiener’s
conceptions missed their target.
Judea Pearl: The Limitations of Opaque Learning Machines
Deep learning has its own dynamics, it does its own repair and its own optimization, and
it gives you the right results most of the time. But when it doesn’t, you don’t have a clue
about what went wrong and what should be fixed.
Stuart Russell: The Purpose Put Into the Machine
We may face the prospect of superintelligent machines—their actions by definition
unpredictable by us and their imperfectly specified objectives conflicting with our own—
whose motivation to preserve their existence in order to achieve those objectives may be
insuperable.
George Dyson: The Third Law
Any system simple enough to be understandable will not be complicated enough to
behave intelligently, while any system complicated enough to behave intelligently will be
too complicated to understand.
Daniel C. Dennett: What Can We Do?
We don’t need artificial conscious agents. We need intelligent tools.
Rodney Brooks: The Inhuman Mess Our Machines Have Gotten Us Into
We are in a much more complex situation today than Wiener foresaw, and I am worried
that it is much more pernicious than even his worst imagined fears.
Frank Wilczek: The Unity of Intelligence
The advantages of artificial over natural intelligence appear permanent, while the
advantages of natural over artificial intelligence, though substantial at present, appear
transient.
Max Tegmark: Let’s Aspire to More Than Making Ourselves Obsolete
We should analyze what could go wrong with AI to ensure that it goes right.
Jaan Tallinn: Dissident Messages
Continued progress in AI can precipitate a change of cosmic proportions—a runaway
process that will likely kill everyone.
Steven Pinker: Tech Prophecy and the Underappreciated Causal Power of Ideas
There is no law of complex systems that says that intelligent agents must turn into
ruthless megalomaniacs.
David Deutsch: Beyond Reward and Punishment
Misconceptions about human thinking and human origins are causing corresponding
misconceptions about AGI and how it might be created.
Tom Griffiths: The Artificial Use of Human Beings
Automated intelligent systems that will make good inferences about what people want
must have good generative models for human behavior.
Anca Dragan: Putting the Human into the AI Equation
In the real world, an AI must interact with people and reason about them. People will
have to formally enter the AI problem definition somewhere.
Chris Anderson: Gradient Descent
Just because AI systems sometimes end up in local minima, don’t conclude that this
makes them any less like life. Humans—indeed, probably all life-forms—are often stuck
in local minima.
David Kaiser: “Information” for Wiener, for Shannon, and for Us
Many of the central arguments in The Human Use of Human Beings seem closer to the
19th century than the 21st. Wiener seems not to have fully embraced Shannon’s notion of
information as consisting of irreducible, meaning-free bits.
Neil Gershenfeld: Scaling
Although machine making and machine thinking might appear to be unrelated trends,
they lie in each other’s futures.
W. Daniel Hillis: The First Machine Intelligences
Hybrid superintelligences such as nation states and corporations have their own
emergent goals and their actions are not always aligned to the interests of the people
who created them.
Venki Ramakrishnan: Will Computers Become Our Overlords?
Our fears about AI reflect the belief that our intelligence is what makes us special.
Alex “Sandy” Pentland: The Human Strategy
How can we make a good human-artificial ecosystem, something that’s not a machine
society but a cyberculture in which we can all live as humans—a culture with a human
feel to it?
Hans Ulrich Obrist: Making the Invisible Visible: Art Meets AI
Many contemporary artists are articulating various doubts about the promises of AI and
reminding us not to associate the term “artificial intelligence” solely with positive
outcomes.
Alison Gopnik: AIs versus Four-Year-Olds
Looking at what children do may give programmers useful hints about directions for
computer learning.
Peter Galison: Algorists Dream of Objectivity
By now, the legal, ethical, formal, and economic dimensions of algorithms are all
quasi-infinite.
George M. Church: The Rights of Machines
Probably we should be less concerned about us-versus-them and more concerned about
the rights of all sentients in the face of an emerging unprecedented diversity of minds.
Caroline A. Jones: The Artistic Use of Cybernetic Beings
The work of cybernetically inclined artists concerns the emergent behaviors of life that
elude AI in its current condition.
Stephen Wolfram: Artificial Intelligence and the Future of Civilization
The most dramatic discontinuity will surely be when we achieve effective human
immortality. Whether this will be achieved biologically or digitally isn’t clear, but
inevitably it will be achieved.
Introduction: On the Promise and Peril of AI
John Brockman
Artificial intelligence is today’s story—the story behind all other stories. It is the Second
Coming and the Apocalypse at the same time: Good AI versus evil AI. This book comes
out of an ongoing conversation with a number of important thinkers, both in the world of
AI and beyond it, about what AI is and what it means. Called the Deep Thinking Project,
this conversation began in earnest in September 2016, in a meeting at the Mayflower
Grace Hotel in Washington, Connecticut with some of the book’s contributors.
What quickly emerged from that first meeting is that the excitement and fear in the wider
culture surrounding AI now has an analogue in the way Norbert Wiener’s ideas regarding
“cybernetics” worked their way through the culture, particularly in the 1960s, as artists
began to incorporate thinking about new technologies into their work. I witnessed the
impact of those ideas at close hand; indeed it’s not too much to say they set me off on my
life’s path. With the advent of the digital era beginning in the early 1970s, people stopped
talking about Wiener, but today, his Cybernetic Idea has been so widely adopted that it’s
internalized to the point where it no longer needs a name. It’s everywhere, it’s in the air,
and it’s a fitting place to begin.
New Technologies=New Perceptions
Before AI, there was Cybernetics—the idea of automatic, self-regulating control, laid out
in Norbert Wiener’s foundational text of 1948. I can date my own serious exposure to it
to 1966, when the composer John Cage invited me and four or five other young arts
people to join him for a series of dinners—an ongoing seminar about media,
communications, art, music, and philosophy that focused on Cage’s interest in the ideas
of Wiener, Claude Shannon, and Marshall McLuhan, all of whom had currency in the
New York art circles in which I was then moving. In particular, Cage had picked up on
McLuhan’s idea that by inventing electronic technologies we had externalized our central
nervous system—that is, our minds—and that we now had to presume that “there’s only
one mind, the one we all share.”
Ideas of this nature were beginning to be of great interest to the artists I was
working with in New York at the Film-Makers’ Cinémathèque, where I was program
manager for a series of multimedia productions called the New Cinema 1 (also known as
the Expanded Cinema Festival), under the auspices of avant-garde filmmaker and
impresario Jonas Mekas. They included visual artists Claes Oldenburg, Robert
Rauschenberg, Andy Warhol, Robert Whitman; kinetic artists Charlotte Moorman and
Nam June Paik; happenings artists Allan Kaprow and Carolee Schneemann; dancer Tricia
Brown; filmmakers Jack Smith, Stan Vanderbeek, Ed Emshwiller, and the Kuchar
brothers; avant-garde dramatist Ken Dewey; poet Gerd Stern and the USCO group;
minimalist musicians La Monte Young and Terry Riley; and through Warhol, the music
group, The Velvet Underground. Many of these people were reading Wiener, and
cybernetics was in the air. It was at one of these dinners that Cage reached into his
briefcase and took out a copy of Cybernetics and handed it to me, saying, “This is for
you.”
During the Festival, I received an unexpected phone call from Wiener’s colleague
Arthur K. Solomon, head of Harvard’s graduate program in biophysics. Wiener had died
the year before, and Solomon and Wiener’s other close colleagues at MIT and Harvard
had been reading about the Expanded Cinema Festival in the New York Times and were
intrigued by the connection to Wiener’s work. Solomon invited me to bring some of the
artists up to Cambridge to meet with him and a group that included MIT
sensory-communications researcher Walter Rosenblith, Harvard applied mathematician
Anthony Oettinger, and MIT engineer Harold “Doc” Edgerton, inventor of the strobe light.
Like many other “art meets science” situations I’ve been involved in since, the
two-day event was an informed failure: ships passing in the night. But I took it all
onboard and the event was consequential in some interesting ways—one of which came
from the fact that they took us to see “the” computer. Computers were a rarity back then;
at least, none of us on the visit had ever seen one. We were ushered into a large space on
the MIT campus, in the middle of which there was a “cold room” raised off the floor and
enclosed in glass, in which technicians wearing white lab coats, scarves, and gloves were
busy collating punch cards coming through an enormous machine. When I approached,
the steam from my breath fogged up the window into the cold room. Wiping it off, I saw
“the” computer. I fell in love.
Later, in the Fall of 1967, I went to Menlo Park to spend time with Stewart Brand,
whom I had met in New York in 1965 when he was a satellite member of the USCO
group of artists. Now, with his wife Lois, a mathematician, he was preparing the first
edition of The Whole Earth Catalog for publication. While Lois and the team did the
heavy lifting on the final mechanicals for WEC, Stewart and I sat together in a corner for
two days, reading, underlining, and annotating the same paperback copy of Cybernetics
that Cage had handed to me the year before, and debating Wiener’s ideas.
Inspired by this set of ideas, I began to develop a theme, a mantra of sorts, that
has informed my endeavors since: “new technologies = new perceptions.” Inspired by
communications theorist Marshall McLuhan, architect-designer Buckminster Fuller,
futurist John McHale, and cultural anthropologists Edward T. (Ned) Hall and Edmund
Carpenter, I started reading avidly in the field of information theory, cybernetics, and
systems theory. McLuhan suggested I read biologist J.Z. Young’s Doubt and Certainty
in Science in which he said that we create tools and we mold ourselves through our use of
them. The other text he recommended was Warren Weaver and Claude Shannon’s 1949
paper “Recent Contributions to the Mathematical Theory of Communication,” which
begins: “The word communication will be used here in a very broad sense to include all
of the procedures by which one mind may affect another. This, of course, involves not
only written and oral speech, but also music, the pictorial arts, the theater, the ballet, and
in fact all human behavior."
Who knew that within two decades of that moment we would begin to recognize
the brain as a computer? And in the next two decades, as we built our computers into the
Internet, that we would begin to realize that the brain is not a computer, but a network of
computers? Certainly not Wiener, a specialist in analogue feedback circuits designed to
control machines, nor the artists, nor, least of all, myself.
“We must cease to kiss the whip that lashes us.”
Two years after Cybernetics, in 1950, Norbert Wiener published The Human Use of
Human Beings—a deeper story, in which he expressed his concerns about the runaway
commercial exploitation and other unforeseen consequences of the new technologies of
control. I didn’t read The Human Use of Human Beings until the spring of 2016, when I
picked up my copy, a first edition, which was sitting in my library next to Cybernetics.
What shocked me was the realization of just how prescient Wiener was in 1950 about
what’s going on today. Although the first edition was a major bestseller—and, indeed,
jump-started an important conversation—under pressure from his peers Wiener brought
out a revised and milder edition in 1954, from which the original concluding chapter,
“Voices of Rigidity,” is conspicuously absent.
Science historian George Dyson points out that in this long-forgotten first edition,
Wiener predicted the possibility of a “threatening new Fascism dependent on the machine
à gouverner”:
No elite escaped his criticism, from the Marxists and the Jesuits (“all of
Catholicism is indeed essentially a totalitarian religion”) to the FBI (“our great
merchant princes have looked upon the propaganda technique of the Russians,
and have found that it is good”) and the financiers lending their support “to make
American capitalism and the fifth freedom of the businessman supreme
throughout the world.” Scientists . . . received the same scrutiny given the
Church: “Indeed, the heads of great laboratories are very much like Bishops, with
their association with the powerful in all walks of life, and the dangers they incur
of the carnal sins of pride and of lust for power.”
This jeremiad did not go well for Wiener. As Dyson puts it:
These alarms were discounted at the time, not because Wiener was wrong about
digital computing but because larger threats were looming as he completed his
manuscript in the fall of 1949. Wiener had nothing against digital computing but
was strongly opposed to nuclear weapons and refused to join those who were
building digital computers to move forward on the thousand-times-more-powerful
hydrogen bomb.
Since the original of The Human Use of Human Beings is now out of print, lost to
us is Wiener’s cri de coeur, more relevant today than when he wrote it, sixty-eight years
ago: “We must cease to kiss the whip that lashes us.”
Mind, Thinking, Intelligence
Among the reasons we don’t hear much about “Cybernetics” today, two are central: First,
although The Human Use of Human Beings was considered an important book in its time,
it ran counter to the aspirations of many of Wiener’s colleagues, including John von
Neumann and Claude Shannon, who were interested in the commercialization of the new
technologies. Second, computer pioneer John McCarthy disliked Wiener and refused to
use Wiener’s term “Cybernetics.” McCarthy, in turn, coined the term “artificial
intelligence” and became a founding father of that field.
As Judea Pearl, who, in the 1980s, introduced a new approach to artificial
intelligence called Bayesian networks, explained to me:
What Wiener created was excitement to believe that one day we are going to
make an intelligent machine. He wasn't a computer scientist. He talked feedback,
he talked communication, he talked analog. His working metaphor was a
feedback circuit, which he was an expert in. By the time the digital age began in
the early 1960s people wanted to talk programming, talk codes, talk about
computational functions, talk about short-term memory, long-term memory—
meaningful computer metaphors. Wiener wasn’t part of that, and he didn’t reach
the new generation that germinated with his ideas. His metaphors were too old,
passé. There were new means already available that were ready to capture the
human imagination. By 1970, people were no longer talking about Wiener.
One critical factor missing in Wiener’s vision was the cognitive element: mind, thinking,
intelligence. As early as 1942, at the first of a series of foundational interdisciplinary
meetings about the control of complex systems that would come to be known as the
Macy conferences, leading researchers were arguing for the inclusion of the cognitive
element into the conversation. While von Neumann, Shannon, and Wiener were
concerned about systems of control and communication of observed systems, Warren
McCulloch wanted to include mind. He turned to cultural anthropologists Gregory
Bateson and Margaret Mead to make the connection to the social sciences. Bateson in
particular was increasingly talking about patterns and processes, or “the pattern that
connects.” He called for a new kind of systems ecology in which organisms and the
environment in which they live are one and the same and should be considered as a
single circuit. By the early 1970s the Cybernetics of observed systems (first-order
Cybernetics) moved to the Cybernetics of observing systems (second-order Cybernetics),
or “the Cybernetics of Cybernetics,” as coined by Heinz von Foerster, who joined the
Macy conferences in the mid-1950s and spearheaded the new movement.
Cybernetics, rather than disappearing, was becoming metabolized into everything,
so we no longer saw it as a separate, distinct new discipline. And there it remains, hiding
in plain sight.
“The Shtick of the Steins”
My own writing about these issues at the time was on the radar screen of the second-order
Cybernetics crowd, including Heinz von Foerster as well as John Lilly and Alan Watts,
who were the co-organizers of something called “The AUM Conference,” shorthand for
“The American University of Masters,” which took place in Big Sur in 1973: a gathering
of philosophers, psychologists, and scientists, each of whom was asked to lecture on his
own work in terms of its relationship to the ideas of British mathematician G. Spencer
Brown presented in his book Laws of Form.
I was a bit puzzled when I received an invitation—a very late invitation indeed—
which they explained was based on their interest in the ideas I presented in a book called
Afterwords, which were very much on their wavelength. I jumped at the opportunity, the
main reason being that the keynote speaker was none other than Richard Feynman. I love
to spend time with physicists, the reason being that they think about the universe, i.e.,
everything. And no physicist was reputed to be as articulate as Feynman. I couldn’t wait to
meet him. I accepted. That said, I am not a scientist, and I had never entertained the idea
of getting on a stage and delivering a “lecture” of any kind, least of all a commentary on
an obscure mathematical theory in front of a group identified as the world’s most
interesting thinkers. Only upon my arrival in Big Sur did I find out the reason for my
very late invitation. “When is Feynman’s talk?” I asked at the desk. “Oh, didn’t Alan
Watts tell you? Richard is ill and has been hospitalized. You’re his replacement. And, by
the way, what’s the title of your keynote lecture?”
I tried to make myself invisible for several days. Alan Watts, realizing that I was
avoiding the podium, woke me up one night with a 3am knock on the door of my room. I
opened the door to find him standing in front of me wearing a monk’s robe with a hood
that covered much of his face. His arms extended, he held a lantern in one hand and a
magnum of scotch in the other.
“John,” he said in a deep voice with a rich aristocratic British accent, “you are a
phony.” “And, John,” he continued, “I am a phony. But John, I am a real phony!”
The next day I gave my lecture, entitled “Einstein, Gertrude Stein, Wittgenstein,
and Frankenstein.” Einstein: the revolution in 20th-century physics. Gertrude Stein: the
first writer who made integral to her work the idea of an indeterminate and discontinuous
universe (words represented neither character nor activity: a rose is a rose is a rose, and
a universe is a universe is a universe). Wittgenstein: the world as limits of language
(“The limits of my language mean the limits of my world”); the end of the distinction
between observer and observed. Frankenstein: Cybernetics, AI, robotics, all the essayists
in this volume.
The lecture had unanticipated consequences. Among the participants at the AUM
Conference were several authors of #1 New York Times bestsellers, yet no one there had
a literary agent. And I realized that all were engaged in writing a genre of book both
unnamed and unrecognized by New York publishers. Since I had an MBA from
Columbia Business School and a series of relative successes in business, I was
dragooned into becoming an agent, initially for Gregory Bateson and John Lilly, whose
books I sold quickly, and for sums that caught my attention, thus kick-starting my career
as a literary agent.
I never did meet Richard Feynman.
The Long AI Winters
This new career put me in close touch with most of the AI pioneers, and over the decades
I rode with them on waves of enthusiasm, and into valleys of disappointment.
In the early ’80s the Japanese government mounted a national effort to advance
AI. They called it the Fifth Generation; their goal was to change the architecture of
computation by breaking “the von Neumann bottleneck” by creating a massively parallel
computer. In so doing, they hoped to jump-start their economy and become a dominant
world power in the field. In 1983, the leader of the Japanese Fifth Generation consortium
came to New York for a meeting organized by Heinz Pagels, the president of the New
York Academy of Sciences. I had a seat at the table alongside the leaders of the first
generation, Marvin Minsky and John McCarthy; the second generation, Edward Feigenbaum
and Roger Schank, and Joseph Traub, head of the National Supercomputer Consortium.
In 1981, with Heinz’s help, I had founded “The Reality Club” (the precursor to the
non-profit Edge.org), whose initial interdisciplinary meetings took place in the Board
Room at the NYAS. Heinz was working on his book, Dreams of Reason: The Rise of the
Science of Complexity, which he considered to be a research agenda for science in the
1990s.
Through the Reality Club meetings, I got to know two young researchers who
were about to play key roles in revolutionizing computer science. At MIT in the late
seventies, Danny Hillis developed the algorithms that made possible the massively
parallel computer. In 1983, his company, Thinking Machines, built the world's fastest
supercomputer by utilizing parallel architecture. His “connection machine” closely
reflected the workings of the human mind. Seth Lloyd at Rockefeller University was
undertaking seminal work in the fields of quantum computation and quantum
communications, including proposing the first technologically feasible design for a
quantum computer.
And the Japanese? Their foray into artificial intelligence failed and was followed
by twenty years of anemic economic growth. But the leading US scientists took this
program very seriously. And Feigenbaum, who was the cutting-edge computer scientist
of the day, teamed up with Pamela McCorduck to write a book on these developments. The Fifth
Generation: Artificial Intelligence and Japan's Computer Challenge to the World was
published in 1983. We had a code name for the project: “It’s coming, it’s coming!” But it
didn’t come; it went.
From that point on I’ve worked with researchers in nearly every variety of AI and
complexity, including Rodney Brooks, Hans Moravec, John Archibald Wheeler, Benoit
Mandelbrot, John Henry Holland, Danny Hillis, Freeman Dyson, Chris Langton, Doyne
Farmer, Geoffrey West, Stuart Russell, and Judea Pearl.
An Ongoing Dynamical Emergent System
From the initial meeting in Washington, CT, to the present, I arranged a number of
dinners and discussions in London and Cambridge, Massachusetts, as well as a public
event at London’s City Hall. Among the attendees were distinguished scientists, science
historians, and communications theorists, all of whom have been thinking seriously about
AI issues for their entire careers.
I commissioned essays from a wide range of contributors, with or without
references to Wiener (leaving it up to each participant). In the end, 25 people wrote
essays, all individuals concerned about what is happening today in the age of AI. Deep
Thinking is not my book; rather, it is our book: Seth Lloyd, Judea Pearl, Stuart Russell,
George Dyson, Daniel C. Dennett, Rodney Brooks, Frank Wilczek, Max Tegmark, Jaan
Tallinn, Steven Pinker, David Deutsch, Tom Griffiths, Anca Dragan, Chris Anderson,
David Kaiser, Neil Gershenfeld, W. Daniel Hillis, Venki Ramakrishnan, Alex “Sandy”
Pentland, Hans Ulrich Obrist, Alison Gopnik, Peter Galison, George M. Church, Caroline
A. Jones, and Stephen Wolfram.
I see The Deep Thinking Project as an ongoing dynamical emergent system, a
presentation of the ideas of a community of sophisticated thinkers who are bringing their
experience and erudition to bear in challenging the prevailing digital AI narrative as they
communicate their thoughts to one another. The aim is to present a mosaic of views
which will help make sense out of this rapidly emerging field.
I asked the essayists to consider:
(a) The Zen-like poem “Thirteen Ways of Looking at a Blackbird,” by Wallace
Stevens, which he insisted was “not meant to be a collection of epigrams or of ideas, but
of sensations.” It is an exercise in “perspectivism,” consisting of short, separate sections,
each of which mentions blackbirds in some way. The poem is about his own imagination;
it concerns what he attends to.
(b) The parable of the blind men and an elephant. Like the elephant, AI is too big
a topic for any one perspective, never mind the fact that no two people seem to see things
the same way.
What do we want the book to do? Stewart Brand has noted that “revisiting
pioneer thinking is perpetually useful. And it gives a long perspective that invites
thinking in decades and centuries about the subject. All contemporary discussion is
bound to age badly and immediately without the longer perspective.”
Danny Hillis wants people in AI to realize how they’ve been programmed by
Wiener’s book. “You’re executing its road map,” he says, “and you just don’t realize it.”
Dan Dennett would like to “let Wiener emerge as the ghost at the banquet. Think
of it as a source of hybrid vigor, a source of unsettling ideas to shake up the established
mindset.”
Neil Gershenfeld argues that “stealth remedial education for the people running
the ‘Big Five’ would be a great output from the book.”
Freeman Dyson, one of the few people alive who knew Wiener, notes
that “The Human Use of Human Beings is one of the best books ever written. Wiener got
almost everything right. I will be interested to see what your bunch of wizards will do
with it.”