Possible Minds: 25 Ways of Looking at AI
== Introduction: On the Promise and Peril of AI ==
* Before AI, there was cybernetics - the idea of automatic, self-regulating control, laid out in Norbert Wiener's foundational 1948 text, "Cybernetics: Or Control and Communication in the Animal and the Machine".
* John Cage had picked up on McLuhan's idea that by inventing electronic technologies we had externalized our central nervous system - that is, our minds - and that we now had to presume that "there's only one mind, the one we all share."
* J.Z. Young, "Doubt and Certainty in Science" - We create tools and we mold ourselves through our use of them.
* Warren Weaver and Claude Shannon, "Recent Contributions to the Mathematical Theory of Communication" - "The word communication will be used here in a very broad sense to include all of the procedures by which one mind may affect another. This, of course, involves not only written and oral speech, but also music, the pictorial arts, the theater, the ballet, and in fact all human behavior."
* John McCarthy disliked Wiener and refused to use the term "cybernetics", coining the term "artificial intelligence" instead.
* While von Neumann, Shannon, and Wiener were concerned with systems of control and communication in observed systems, Warren McCulloch wanted to include mind. He turned to the cultural anthropologists Gregory Bateson and Margaret Mead to make the connection to the social sciences.
* Bateson, in particular, was increasingly talking about patterns and processes, or "the pattern that connects." He called for a new kind of systems ecology in which organisms and the environment in which they live are one and the same and should be considered as a single circuit.
* By the early 1970s the cybernetics of observed systems - first-order cybernetics - had given way to the cybernetics of observing systems - second-order cybernetics, or "the Cybernetics of Cybernetics".
* Cybernetics, rather than disappearing, was becoming metabolized into everything, so we no longer saw it as a separate, distinct new discipline. And there it remains, hiding in plain sight.
- "Einstein, Gertrude Stein, Wittgenstein, and Frankenstein":
- Einstein: The revolution in 20th C physics
- Gertrude Stein: The first writer who made integral to her work the idea of an indeterminate and discontinuous universe. Words represented neither character nor activity.
- Wittgenstein: "The limits of my language mean the limits of my world." - the end of the distinction between observer and observed.
- Frankenstein: Cybernetics, AI, robotics.
* Wallace Stevens, "Thirteen Ways of Looking at a Blackbird" - not meant to be a collection of epigrams or of ideas, but of sensations. An exercise in perspectivism, consisting of short, separate sections, each of which mentions blackbirds in some way. The poem is about his own imagination; it concerns what he attends to.
* Wiener knew the danger was not machines becoming more like humans, but humans being treated like machines.
== Seth Lloyd: Wrong But More Relevant Than Ever ==
* Wiener's central insight was that the world should be understood in terms of information. Complex systems, such as organisms, brains, and human societies, consist of interlocking feedback loops in which signals exchanged between subsystems result in complex but stable behaviors. When the feedback loops break down, the system goes unstable. He constructed a compelling picture of how complex biological systems function, a picture that is by and large universally accepted today.
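A toy illustration of this insight (not from the chapter; the thermostat-style model and all constants are invented for the example): a subsystem that feeds the observed error back into the state keeps the system stable against disturbances, while the same system with the loop severed drifts without bound.

<syntaxhighlight lang="python">
import random

def simulate(steps=50, feedback=True, gain=0.5, setpoint=20.0):
    """Evolve a noisy scalar state, with or without a corrective feedback loop."""
    state = 25.0
    for _ in range(steps):
        disturbance = random.uniform(-1.0, 1.0) + 0.3  # noise with a steady bias
        if feedback:
            # Negative feedback: observe the error and push the state back
            # toward the setpoint, absorbing the disturbance each step.
            state += gain * (setpoint - state) + disturbance
        else:
            # Feedback loop broken: disturbances accumulate unchecked.
            state += disturbance
    return state

random.seed(0)
print(f"with feedback:    final state = {simulate(feedback=True):.2f}")
print(f"without feedback: final state = {simulate(feedback=False):.2f}")
</syntaxhighlight>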
== Judea Pearl: The Limitations of Opaque Learning Machines ==
* What humans had that other species lacked was a mental representation of their environment - a representation that they could manipulate at will to imagine alternative hypothetical environments for planning and learning. Pearl frames this as a three-level causal hierarchy (see the sketch after this list):
** L1: Statistical reasoning (association - seeing): What can a symptom tell you about a disease?
** L2: Actions (intervention - doing): What will happen if...?
** L3: Counterfactuals (imagining): What if...?
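A toy numerical illustration of the gap between L1 and L2 (not from the chapter; the drug/severity model and all probabilities are invented for the example): conditioning on an observed treatment (seeing) and intervening to assign it (doing) give different answers when a confounder influences who gets treated.

<syntaxhighlight lang="python">
import random

random.seed(1)

def sample_patient(do_drug=None):
    """One draw from a toy causal model: severity -> treatment -> recovery.
    Passing do_drug overrides treatment assignment (an intervention)."""
    severe = random.random() < 0.5
    if do_drug is None:
        # Observational regime: sicker patients are likelier to get the drug.
        drug = random.random() < (0.8 if severe else 0.2)
    else:
        drug = do_drug  # do(drug): sever the severity -> treatment link
    p_recover = {(False, True): 0.9, (False, False): 0.8,
                 (True, True): 0.6, (True, False): 0.4}[(severe, drug)]
    return drug, random.random() < p_recover

N = 100_000

# L1 (seeing): P(recover | drug observed) is dragged down by confounding,
# because the treated group is disproportionately severe.
obs = [sample_patient() for _ in range(N)]
treated = [recovered for drug, recovered in obs if drug]
print(f"P(recover | drug)     ~ {sum(treated) / len(treated):.2f}")  # ~0.66

# L2 (doing): P(recover | do(drug)) assigns treatment regardless of severity.
exp = [sample_patient(do_drug=True) for _ in range(N)]
print(f"P(recover | do(drug)) ~ {sum(r for _, r in exp) / N:.2f}")   # ~0.75
</syntaxhighlight>

L3 questions demand still more than this: answering "what if this particular patient had not taken the drug?" requires a mechanism-level model detailed enough to replay an individual case under a different action.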
== Stuart Russell: The Purpose Put into the Machine ==
* 1001 (bad) reasons to pay no attention:
** We can just switch it off.
** Human-level or superhuman AI is impossible.
** It's too soon to worry about it.
** Human-level AI isn't really imminent, in any case.
** You're just a Luddite.
** Any machine intelligent enough to cause trouble will be intelligent enough to have appropriate and altruistic objectives (but see Mars Attacks!).
** Intelligence is multi-dimensional, so "smarter than humans" is a meaningless concept.
* A robot that's uncertain about human preferences actually benefits from being switched off, because it understands that the human will press the off switch to prevent the robot from doing something counter to those preferences. Thus the robot is incentivized to preserve the off switch, and this incentive derives directly from its uncertainty about human preferences.
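A minimal decision-theoretic sketch of this argument (loosely following the "off-switch game" analysis Russell describes; the robot's belief distribution here is invented for the example): deferring to a human who can press the off switch is worth E[max(U, 0)], which beats both acting unilaterally (E[U]) and shutting down (0) whenever the robot is genuinely uncertain about U.

<syntaxhighlight lang="python">
import random

random.seed(2)

# The robot is uncertain about the human's utility U for its proposed action.
# Hypothetical belief: U ~ Normal(0, 1), represented by Monte Carlo samples.
belief = [random.gauss(0.0, 1.0) for _ in range(100_000)]

act_now = sum(belief) / len(belief)  # act unilaterally: worth E[U]
switch_off = 0.0                     # shut down unilaterally: worth 0

# Defer and preserve the off switch: a rational human presses the switch
# exactly when U < 0, so the robot receives E[max(U, 0)].
defer = sum(max(u, 0.0) for u in belief) / len(belief)

print(f"E[act now]            = {act_now:+.3f}")
print(f"E[switch self off]    = {switch_off:+.3f}")
print(f"E[defer, keep switch] = {defer:+.3f}")  # highest while uncertain
</syntaxhighlight>

As the robot's uncertainty about U collapses, E[max(U, 0)] approaches max(E[U], 0) and the advantage of keeping the switch disappears - which is exactly the sense in which the incentive derives from uncertainty.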