Possible Minds: 25 Ways of Looking at AI

From Slow Like Wiki

Introduction: On the Promise and Peril of AI

  • Before AI, there was cybernetics - the idea of automatic, self-regulating control, laid out in Norbert Wiener's foundational text of 1948.
  • John Cage had picked up on McLuhan's idea that by inventing electronic technologies we had externalized our central nervous system - that is, our minds - and that we now had to presume that "there's only one mind, the one we all share."
  • JZ Young "Doubt and Certainty in Science" - We create tools and we mold ourselves through our use of them.
  • Warren Weaver and Claude Shannon "Recent Contributions to the Mathematical Theory of Communication" - "The word communication will be used here in a very broad sense to include all of the procedures by which one mind may affect another. This, of course, involves not only written and oral speech, but also music, the pictorial arts, the theater, the ballet, and in fact all human behavior."
  • John McCarthy disliked Wiener and refused to use the term "cybernetics", coining the term "artificial intelligence" instead.
  • While von Neumann, Shannon, and Wiener were concerned with control and communication in observed systems, Warren McCulloch wanted to include mind. He turned to the cultural anthropologists Gregory Bateson and Margaret Mead to make the connection to the social sciences.
  • Bateson, in particular, was increasingly talking about patterns and processes, or "the pattern that connects." He called for a new kind of systems ecology in which organisms and the environment in which they live are one and the same and should be considered as a single circuit.
  • By the early 1970s the cybernetics of observed systems - first-order cybernetics - had given way to the cybernetics of observing systems - second-order cybernetics, or "the Cybernetics of Cybernetics".
  • Cybernetics, rather than disappearing, was becoming metabolized into everything, so we no longer saw it as a separate, distinct new discipline. And there it remains, hiding in plain sight.
  • "Einstein, Gertrude Stein, Wittgenstein, and Frankenstein":
    • Einstein: The revolution in 20th-century physics
    • Gertrude Stein: The first writer who made integral to her work the idea of an indeterminate and discontinuous universe. Words represented neither character nor activity.
    • Wittgenstein: "The limits of my language mean the limits of my world." - the end of the distinction between observer and observed.
    • Frankenstein: Cybernetics, AI, robotics.
  • Wallace Stevens "Thirteen Ways of Looking at a Blackbird" - not meant to be a collection of epigrams or of ideas, but of sensations. An exercise in perspectivism, consisting of short, separate sections, each of which mentions blackbirds in some way. The poem is about his own imagination; it concerns what he attends to.
  • He knew the danger was not machines becoming more like humans, but humans being treated like machines.

Seth Lloyd: Wrong But More Relevant Than Ever

  • Wiener's central insight was that the world should be understood in terms of information. Complex systems, such as organisms, brains, and human societies, consist of interlocking feedback loops in which signals exchanged between subsystems result in complex but stable behaviors. When feedback loops break down, the system goes unstable. He constructed a compelling picture of how complex biological systems function, a picture that is broadly accepted today.

Judea Pearl: The Limitations of Opaque Learning Machines

  • What humans had that other species lacked was a mental representation of their environment - a representation that they could manipulate at will to imagine alternative hypothetical environments for planning and learning:
    • Level 1 (association): Statistical reasoning - What can a symptom tell you about a disease?
    • Level 2 (intervention): Actions - What will happen if I do...?
    • Level 3 (counterfactuals): What if I had acted differently?
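Pearl's three-level ladder of causation can be sketched with a toy structural causal model (my own illustrative rain/sprinkler example; the variables and probabilities are assumptions, not from the chapter):

```python
import random

random.seed(0)

# Toy structural causal model: rain and sprinkler each independently
# cause the ground to be wet (illustrative numbers).
def sample():
    rain = random.random() < 0.3
    sprinkler = random.random() < 0.5
    wet = rain or sprinkler
    return rain, sprinkler, wet

# Level 1 (association): seeing a wet ground raises the probability of rain.
samples = [sample() for _ in range(100_000)]
p_rain_given_wet = (sum(1 for r, s, w in samples if w and r)
                    / sum(1 for r, s, w in samples if w))   # ~0.46, above the 0.3 prior

# Level 2 (intervention): do(sprinkler=True) replaces the sprinkler's own
# mechanism but leaves rain's mechanism untouched, so rain keeps its prior.
def sample_do_sprinkler_on():
    rain = random.random() < 0.3
    sprinkler = True                 # the intervention
    wet = rain or sprinkler
    return rain, sprinkler, wet

do_samples = [sample_do_sprinkler_on() for _ in range(100_000)]
p_rain_under_do = sum(1 for r, s, w in do_samples if r) / len(do_samples)  # ~0.30

# Level 3 (counterfactual): we observed rain=True, sprinkler=True, wet=True.
# Abduction fixes the exogenous fact (it rained); re-running the model with
# the sprinkler off answers "would the ground still have been wet?"
rain_actual = True
wet_had_sprinkler_been_off = rain_actual or False   # True: rain alone suffices
```

The point of the sketch is that the three questions need three different computations: conditioning on an observation is evidence about its causes, intervening on a variable is not, and a counterfactual requires holding the exogenous facts fixed while replaying an altered mechanism.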

Stuart Russell: The Purpose Put into the Machine

  • 1001 (bad) reasons to pay no attention:
    • We can just switch it off
    • Human level or superhuman AI is impossible
    • It's too soon to worry about it
    • Human-level AI isn't really imminent, in any case
    • You're just a luddite
    • Any machine intelligent enough to cause trouble will be intelligent enough to have appropriate and altruistic objectives (but, see Mars Attacks!)
    • Intelligence is multi-dimensional, so "smarter than humans" is a meaningless concept.
  • A robot that's uncertain about human preferences actually benefits from being switched off, because it understands that the human will press the off switch to prevent the robot from doing something counter to those preferences. Thus the robot is incentivized to preserve the off switch, and this incentive derives directly from its uncertainty about human preferences.
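The off-switch incentive can be sketched as a small expected-utility calculation (an illustrative model in the spirit of the "off-switch game"; the utility distribution is an assumption, not from the book):

```python
import random

random.seed(1)

# The robot is unsure of the true utility u of its proposed action.
# The human knows u and will press the off switch whenever u < 0.
def expected_values(utilities):
    n = len(utilities)
    act_now = sum(utilities) / n                      # act, bypassing the human
    defer = sum(max(u, 0.0) for u in utilities) / n   # human vetoes when u < 0
    return act_now, defer

# Robot's belief: the action's utility is drawn uniformly from [-1, 1]
# (an assumed prior, chosen only to make the gap visible).
belief = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
act_now, defer = expected_values(belief)
# defer > act_now here: keeping the off switch working is worth ~0.25
# in expectation, while acting unilaterally is worth ~0.
```

Deferring weakly dominates acting, because E[max(u, 0)] >= max(E[u], 0); the gap is exactly the robot's uncertainty about whether u is negative, so a robot fully certain of its objective has no such incentive to preserve the switch.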

George Dyson: The Third Law

  • The history of computing can be divided into an Old Testament and a New Testament: before and after electronic digital computers and the codes they spawned proliferated across the earth:
    • The OT prophets, who delivered the underlying logic, included Thomas Hobbes and Leibniz.
    • The NT prophets delivered the machines:
      • Turing - What would it take for machines to become intelligent?
      • von Neumann - What would it take for them to reproduce?
      • Shannon - How could they communicate reliably?
      • Wiener - How long would it take for them to assume control?
  • There is no precise distinction between analog and digital computing:
    • In general, digital computing deals with integers, binary sequences, deterministic logic, and time that is idealized into discrete increments. Intolerant of error or ambiguity, it depends upon error correction at every step along the way.
    • Analog computing deals with real numbers, nondeterministic logic, and continuous functions, including time as it exists as a continuum in the real world. Complexity resides in network topology, not in code. Information is processed as continuous functions of values such as voltage and relative pulse frequency rather than by logical operations on discrete strings of bits. It tolerates errors, allowing you to live with them.
  • Nature uses digital coding for the storage, replication, and recombination of sequences of nucleotides, but relies on analog computing, running on nervous systems, for intelligence and control. The genetic system in every living cell is a stored-program computer. Brains aren't.
  • Analog computers also mediate transformations between two forms of information: structure in space and behavior in time. There is no code and no programming. Somehow - and we don't fully understand how - nature evolved analog computers known as nervous systems, which embody information absorbed from the world. They learn. One of the things they learn is control. They learn to control their own behavior, and they learn to control their environment to the extent that they can.
  • While we argue about the intelligence of digital computers, analog computing is quietly supervening upon the digital, in the same way that analog components like vacuum tubes were repurposed to build digital computers in the aftermath of World War II. Individually deterministic finite state processors, running finite codes, are forming large-scale nondeterministic, non-finite-state metazoan organisms running wild in the real world.
  • The resulting hybrid analog/digital systems treat streams of bits collectively, the way the flow of electrons is treated in a vacuum tube, rather than individually, as bits are treated by the discrete-state devices generating the flow. Bits are the new electrons. Analog is back and its nature is to assume control.
  • What if you wanted to build a machine to capture what everything known to the human species means? With Moore's Law behind you, it doesn't take too long to digitize all the information in the world. You scan every book ever printed, collect every email ever written, and gather forty-nine years of video every 24 hours, while tracking where people are and what they do, in real time. But how do you capture meaning?
  • Three laws of AI:
    • Ashby's law: Any effective control system must be as complex as the system it controls.
    • Second law (von Neumann) - The defining characteristic of a complex system is that it constitutes its own simplest behavioral description. The simplest complete model of an organism is the organism itself.
    • Third law - Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.
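The digital/analog contrast drawn above - digital's intolerance of error versus analog's graceful degradation - can be illustrated with a toy comparison (my own sketch, not from the chapter):

```python
import random

random.seed(2)

# Digital: a value is a discrete bit string; flipping a single bit
# changes it drastically, which is why digital computing needs error
# correction at every step.
x = 1000                      # binary 0b1111101000
x_corrupted = x ^ (1 << 9)    # flip the bit worth 512: 1000 becomes 488

# Analog: the same value carried as a noisy continuous signal degrades
# gracefully; averaging recovers it despite substantial noise.
signal = [1000.0 + random.gauss(0.0, 50.0) for _ in range(10_000)]
recovered = sum(signal) / len(signal)   # close to 1000.0
```

One corrupted bit halves the digital value, while noise with a standard deviation of 5% of the signal barely moves the analog estimate - the tolerance Dyson attributes to analog computing.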

Daniel Dennett: What Can We Do

Rodney Brooks: The Inhuman Mess Our Machines Have Gotten Us Into

Frank Wilczek: The Unity of Intelligence

Max Tegmark: Let's Aspire to More Than Making Ourselves Obsolete

Jaan Tallinn: Dissident Messages

Steven Pinker: Tech Prophecy and the Underappreciated Causal Power of Ideas

David Deutsch: Beyond Reward and Punishment

Tom Griffiths: The Artificial Use of Human Beings

Anca Dragan: Putting the Human into the AI Equation

Chris Anderson: Gradient Descent

David Kaiser: "Information" for Wiener, for Shannon, and for Us

Neil Gershenfeld: Scaling

W Daniel Hillis: The First Machine Intelligences

Venki Ramakrishnan: Will Computers Become Our Overlords?

Alex "Sandy" Pentland: The Human Strategy

Hans Ulrich Obrist: Making the Invisible Visible: Art Meets AI

Alison Gopnik: AIs vs Four-Year-Olds

Peter Galison: Algorists Dream of Objectivity

George M Church: The Rights of Machines

Caroline A Jones: The Artistic Use of Cybernetic Beings

Stephen Wolfram: Artificial Intelligence and the Future of Civilization