Possible Minds: 25 Ways of Looking at AI

Introduction: On the Promise and Peril of AI

  • Before AI, there was cybernetics - the idea of automatic, self-regulating control, laid out in Norbert Wiener's foundational text of 1948.
  • John Cage had picked up on McLuhan's idea that by inventing electronic technologies we had externalized our central nervous system - that is, our minds - and that we now had to presume that "there's only one mind, the one we all share."
  • JZ Young "Doubt and Certainty in Science" - We create tools and we mold ourselves through our use of them.
  • Warren Weaver and Claude Shannon "Recent Contributions to the Mathematical Theory of Communication" - "The word communication will be used here in a very broad sense to include all of the procedures by which one mind may affect another. This, of course, involves not only written and oral speech, but also music, the pictorial arts, the theater, the ballet, and in fact all human behavior."
  • John McCarthy disliked Wiener and refused to use the term "cybernetics", creating the term "artificial intelligence" instead.
  • While von Neumann, Shannon, and Wiener were concerned about systems of control and communication of observed systems, Warren McCulloch wanted to include mind. He turned to cultural anthropologists Gregory Bateson and Margaret Mead to make the connection to the social sciences.
  • Bateson, in particular, was increasingly talking about patterns and processes, or "the pattern that connects." He called for a new kind of systems ecology in which organisms and the environment in which they live are one and the same and should be considered as a single circuit.
  • By the early 1970s the cybernetics of observed systems - first-order cybernetics - moved to the cybernetics of observing systems - second-order cybernetics, or "the Cybernetics of Cybernetics".
  • Cybernetics, rather than disappearing, was becoming metabolized into everything, so we no longer saw it as a separate, distinct new discipline. And there it remains, hiding in plain sight.
  • "Einstein, Gertrude Stein, Wittgenstein, and Frankenstein":
    • Einstein: The revolution in 20th C physics
    • Gertrude Stein: The first writer who made integral to her work the idea of an indeterminate and discontinuous universe. Words represented neither character nor activity.
    • Wittgenstein: "The limits of my language mean the limits of my world." - the end of the distinction between observer and observed.
    • Frankenstein: Cybernetics, AI, robotics.
  • Wallace Stevens "Thirteen Ways of Looking at a Blackbird" - not meant to be a collection of epigrams or of ideas, but of sensations. An exercise in perspectivism, consisting of short, separate sections, each of which mentions blackbirds in some way. The poem is about his own imagination; it concerns what he attends to.
  • He knew the danger was not machines becoming more like humans, but humans being treated like machines.

Seth Lloyd: Wrong But More Relevant Than Ever

  • Wiener's central insight was that the world should be understood in terms of information. Complex systems, such as organisms, brains, and human societies, consist of interlocking feedback loops in which signals exchanged between subsystems result in complex but stable behaviors. When feedback loops break down, the system goes unstable. He constructed a compelling picture of how complex biological systems function, a picture that is by and large universally accepted today.
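
A minimal sketch of that feedback picture, using a thermostat-like toy model (all numbers invented): a signal measuring the gap between the current state and the desired state is fed back to drive a correction, and removing that loop lets the same system drift away.

```python
# Negative feedback keeps a system near its set point; without it, the system drifts.
SET_POINT = 20.0   # desired temperature
OUTSIDE = 5.0      # the environment constantly pulls the temperature away

def simulate(feedback_gain, steps=200):
    temp = 15.0
    for _ in range(steps):
        error = SET_POINT - temp             # the signal fed back from the measurement
        heating = feedback_gain * error      # the controller's response to that signal
        temp += 0.1 * (OUTSIDE - temp) + 0.1 * heating   # heat loss plus heater output
    return temp

print("with feedback:   ", round(simulate(feedback_gain=9.0), 1))   # stays near the set point
print("without feedback:", round(simulate(feedback_gain=0.0), 1))   # falls toward the outside temperature
```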

Judea Pearl: The Limitations of Opaque Learning Machines

  • What humans had that other species lacked was a mental representation of their environment - a representation that they could manipulate at will to imagine alternative hypothetical environments for planning and learning:
    • L1: Statistical reasoning - What can a symptom tell you about a disease?
    • L2: Actions - What will happen if...?
    • L3: Counterfactuals - What if I had acted differently?
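
A minimal sketch, with invented numbers, of the gap between the first two rungs: conditioning on who happened to be treated (L1) mixes the treatment's effect with the hidden severity that prompted treatment, while simulating an intervention (L2) answers "what will happen if we treat everyone?"

```python
import random
random.seed(0)
N = 200_000

# Toy world: severe cases get treated more often; treatment helps, severity hurts.
def draw(force_treatment=None):
    severe = random.random() < 0.5
    if force_treatment is None:
        treated = random.random() < (0.8 if severe else 0.2)   # doctors treat the severe
    else:
        treated = force_treatment                              # do(treatment): override the doctor
    p_recover = 0.5 + (0.2 if treated else 0.0) - (0.4 if severe else 0.0)
    return treated, random.random() < p_recover

observational = [draw() for _ in range(N)]

# L1 (seeing): P(recover | treated) -- condition on who happened to be treated
treated_cases = [rec for (tr, rec) in observational if tr]
print("P(recover | treated)     =", round(sum(treated_cases) / len(treated_cases), 3))

# L2 (doing): P(recover | do(treated)) -- force treatment on everyone
interventional = [rec for (_, rec) in (draw(force_treatment=True) for _ in range(N))]
print("P(recover | do(treated)) =", round(sum(interventional) / len(interventional), 3))
```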

Stuart Russell: The Purpose Put into the Machine

  • 1001 (bad) reasons to pay no attention:
    • We can just switch it off
    • Human-level or superhuman AI is impossible
    • It's too soon to worry about it
    • Human-level AI isn't really imminent, in any case
    • You're just a luddite
    • Any machine intelligent enough to cause trouble will be intelligent enough to have appropriate and altruistic objectives (but, see Mars Attacks!)
    • Intelligence is multi-dimensional, so "smarter than humans" is a meaningless concept.
  • A robot that's uncertain about human preferences actually benefits from being switched off, because it understands that the human will press the off switch to prevent the robot from doing something counter to those preferences. Thus the robot is incentivized to preserve the off switch, and this incentive derives directly from its uncertainty about human preferences.
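
A minimal numerical sketch of that incentive (the belief distribution over the human's utility U is my assumption, not Russell's exact model): deferring to a human who can press the off switch whenever the action would hurt them is worth E[max(U, 0)], which is never less than acting immediately (E[U]) or doing nothing (0).

```python
import random
random.seed(0)

# The robot's uncertain belief about the human's utility U for a proposed action.
belief = [random.gauss(0.1, 1.0) for _ in range(100_000)]

act_now    = sum(belief) / len(belief)                         # E[U]: act without asking
do_nothing = 0.0                                               # never act
defer      = sum(max(u, 0.0) for u in belief) / len(belief)    # E[max(U, 0)]: let the human veto

print(f"act now: {act_now:.3f}  do nothing: {do_nothing:.3f}  defer to the off switch: {defer:.3f}")
# Deferring dominates precisely because the robot is uncertain about U;
# if it were certain, letting the human press the switch could never change its expected value.
```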

George Dyson: The Third Law

  • The history of computing can be divided into an Old Testament and a New Testament: before and after electronic digital computers and the codes they spawned proliferated across the earth:
    • The OT prophets, who delivered the underlying logic, included Thomas Hobbes and Leibniz.
    • The NT prophets delivered the machines:
      • Turing - What would it take for machines to become intelligent?
      • von Neumann - What would it take for them to reproduce?
      • Shannon - How could they communicate reliably?
      • Wiener - How long would it take for them to assume control?
  • There is no precise distinction between analog and digital computing:
    • In general, digital computing deals with integers, binary sequences, deterministic logic, and time that is idealized into discrete increments. Intolerant of error or ambiguity, it depends upon error correction at every step along the way.
    • Analog computing deals with real numbers, nondeterministic logic, and continuous functions, including time as it exists as a continuum in the real world. Complexity resides in network topology, not in code. Information is processed as continuous functions of values such as voltage and relative pulse frequency rather than by logical operations on discrete strings of bits. It tolerates errors, allowing you to live with them.
  • Nature uses digital coding for the storage, replication, and recombination of sequences of nucleotides, but relies on analog computing, running on nervous systems, for intelligence and control. The genetic system in every living cell is a stored-program computer. Brains aren't.
  • Analog computers also mediate transformations between two forms of information: structure in space and behavior in time. There is no code and no programming. Somehow - and we don't fully understand how - nature evolved analog computers known as nervous systems, which embody information absorbed from the world. They learn. One of the things they learn is control. They learn to control their own behavior, and they learn to control their environment to the extent that they can.
  • While we argue about the intelligence of digital computers, analog computing is quietly supervening upon the digital, in the same way that analog components like vacuum tubes were repurposed to build digital computers in the aftermath of World War II. Individually deterministic finite state processors, running finite codes, are forming large-scale nondeterministic, non-finite-state metazoan organisms running wild in the real world.
  • The resulting hybrid analog/digital systems treat streams of bits collectively, the way the flow of electrons is treated in a vacuum tube, rather than individually, as bits are treated by the discrete-state devices generating the flow. Bits are the new electrons. Analog is back and its nature is to assume control.
  • What if you wanted to build a machine to capture what everything known to the human species means? With Moore's Law behind you, it doesn't take too long to digitize all the information in the world. You scan every book ever printed, collect every email ever written, and gather forty-nine years of video every 24 hours, while tracking where people are and what they do, in real time. But how do you capture meaning?
  • Three laws of AI:
    • Ashby's law: Any effective control system must be as complex as the system it controls.
    • Second law (von Neumann) - The defining characteristic of a complex system is that it constitutes its own simplest behavioral description. The simplest complete model of an organism is the organism itself.
    • Third law - Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.

Daniel Dennett: What Can We Do

  • We should all make it a regular practice to reread books from our youth, where we are apt to discover clear previews of some of our own later "discoveries" and "inventions," along with a wealth of insights to which we were bound to be impervious until our minds had been torn and tattered, exercised and enlarged, by confrontations with life's problems.
  • Latter-day Cartesians cannot bear the thought that we - we ourselves - are self-perpetuating patterns of information-bearing matter, not "stuff that abides". Those patterns are remarkably resilient and self-restoring but at the same time protean, opportunistic, selfish exploiters of whatever new is available to harness in their quest for perpetuation. And here is where things get dicey, as Wiener recognized. When attractive opportunities abound, we are apt to be willing to pay a little and accept some small, even trivial, cost-of-doing-business for access to new powers. And pretty soon we become so dependent on our new tools that we lose the ability to thrive without them. Options become obligatory.
  • Orwell's Ministry of Truth is now a practical possibility. AI techniques for creating all-but-undetectable forgeries of "recordings" of encounters are now becoming available, which will render obsolete the tools of investigation we have come to take for granted in the last 150 years. Will we simply abandon the brief Age of Photographic Evidence and return to the earlier world in which human memory and trust provided the gold standard, or will we develop techniques of defense and offense in the arms race of truth?
  • The information age is also the disinformation age.
  • IBM's Watson could be a useful core faculty for a general intelligent agent, but more like a cerebellum or an amygdala than a mind - at best a special-purpose subsystem that could play a big supporting role, but not remotely up to the task of framing purposes and plans and building insightfully on its conversational experiences.
  • Perhaps Turing's brilliant idea of an operational test has lured us into a trap: the quest to create at least an illusion of a real person behind the screen, bridging the "uncanny valley". The danger here is that ever since Turing posed his challenge to fool the judges, AI creators have attempted to paper over the valley with cutesy humanoid touches, Disneyfication effects that will enchant and disarm the uninitiated.
  • Human embellishments are false advertising - something to condemn, not applaud.
  • We don't need artificial conscious agents. There is a surfeit of natural conscious agents, enough to handle whatever tasks should be reserved for such special and privileged entities. We need intelligent tools. Tools do not have rights, and should not have feelings that could be hurt, or be able to respond with resentment to "abuses" rained on them by inept users.
  • We should be creating an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates, no personality (but all sorts of foibles and quirks that would no doubt be identified as the "personality" of the system): boxes of truths (if we're lucky) almost certainly contaminated with a scattering of falsehoods. It will be hard enough learning to live with them without distracting ourselves with fantasies about the Singularity in which these AIs will enslave us!

Rodney Brooks: The Inhuman Mess Our Machines Have Gotten Us Into

  • We over-anthropomorphize humans, who are after all mere machines!
  • Wiener was a giant of an earlier era and built on the tools developed since the time of Newton and Leibniz to describe and analyze continuous processes in the physical world.
  • In the first industrial revolution, energy from a steam engine or a waterwheel was used by human workers to replace their own energy. Instead of being a source of energy for physical work, people became modulators of how a large source of energy was used. But because steam engines and waterwheels had to be large to be an efficient use of capital, and because in the 18th C the only technology for spatial distribution of energy was mechanical and worked only at very short range, many workers needed to be crowded around the source of energy.
  • Wiener correctly argues that the ability to transmit energy as electricity caused a second industrial revolution. Now the source of energy could be distant from where it was used, and from the beginning of the 20th C, manufacturing could be much more dispersed as electrical distribution grids were built.

Frank Wilczek: The Unity of Intelligence

  • Each new level of size and complexity supports new forms of organization, whose patterns encode information in new ways and whose behavior is best described using new concepts.
  • Today, smartphones and the internet are bringing the human drive toward augmentation into realms more central to our identity as intelligent beings. They are giving us, in effect, quick access to a vast collective awareness and a vast collective memory, and are:
    • Close to a billion times faster than brains
    • Small, with smallness making communication more efficient
    • More stable, capable of being restored with complete accuracy
    • Longer-lasting, with no need to eat or sleep (or die)
    • Modular, while the human nervous system actively resists implants
    • Quantum-ready
  • However, brains are:
    • 3D, unlike the 2D of computer boards and chips
    • Self-repairing, to an extent
    • More connective, with hundreds of connections per neuron compared to the handful of fixed connections of computer units
    • Self-assembling with interactive sculpting, tuned by the child's interactions with the external world
    • Integrated with sensors and actuators

Max Tegmark: Let's Aspire to More Than Making Ourselves Obsolete

  • Consciousness is the cosmic awakening: it transformed our universe from a mindless zombie with no self-awareness into a living ecosystem harboring self-reflection, beauty, hope, meaning, and purpose.
  • Intelligence is simply a certain kind of information processing performed by elementary particles moving around, and there's no law of physics that says one can't build machines more intelligent in every way than we are, and able to seed cosmic life.
  • First, we humans discovered how to replicate some natural processes with machines, making our own heat, light, and mechanical horsepower. Gradually we realized that our bodies were also machines, and the discovery of nerve cells blurred the boundary between body and mind. Finally, we started building machines that could outperform not only our muscles but our minds as well. We've now been eclipsed by machines in the performance of many narrow cognitive tasks, ranging from memorization and arithmetic to game play, and we are in the process of being overtaken in many more, from driving to investing to medical diagnosing. If the AI community succeeds in its original goal of building AGI then we will have, by definition, been eclipsed at all cognitive tasks.
  • Darwinian evolution endowed us with powerful fear of concrete threats, not of abstract threats from future technologies that are hard to visualize or even imagine.
  • Since intelligence is the ability to accomplish goals, a superintelligent AI is by definition much better at accomplishing its goals than we humans are at accomplishing ours, and will therefore prevail.

Jaan Tallinn: Dissident Messages

  • The only (meta)stable entities are messages - patterns of information that propagate through time, like waves propagating across the surface of a lake. Even we humans can be considered messages, because the atoms in our bodies are too fleeting to attach our identities to. Instead, we are the "message" that our bodily functions maintain. "It is the pattern maintained by this homeostasis, which is the touchstone of our personal identity."
  • In some very real sense, big corporations are nonhuman machines that pursue their own interests - interests that might not align with those of any particular human working for them.

Steven Pinker: Tech Prophecy and the Underappreciated Causal Power of Ideas

  • Three abstract concepts that fit with the computational theory of mind:
    • Knowledge - Patterns in matter or energy that stand in systematic relations with states of the world, with mathematical and logical truths, and with one another.
    • Reasoning - Transformations of that knowledge by physical operations that are designed to preserve those relations.
    • Purpose - The control of operations to effect changes in the world, guided by discrepancies between its current state and a goal state.
  • The physical functioning of the living individual and the operation of some of the newer communication machines are precisely parallel in their analogous attempts to control entropy through feedback - the staving off of life-sapping entropy being the ultimate goal of human beings.
  • Wiener also applied the ideas of cybernetics to society. The laws, norms, customs, media, forums, and institutions of a complex community could be considered channels of information propagation and feedback that allow a society to ward off disorder and pursue certain goals.
  • To say that ideas - and not just weather, resources, geography, or weaponry - can shape history is not woolly mysticism. It is a statement of the causal powers of information instantiated in human brains and exchanged in networks of communication and feedback. Deterministic theories of history, whether they identify the causal engine as technological, climatological, or geographic, are belied by the causal power of ideas.
  • A healthy society - one that gives its members the means to pursue life in defiance of entropy - allows information sensed and contributed by its members to feed back and affect how the society is governed. A dysfunctional society invokes dogma and authority to impose control from the top down.
  • Ideas, norms, and institutions are themselves a form of technology, consisting of patterns of information distributed across brains.
  • Intelligence is the ability to deploy novel means to attain a goal. But the goals are extraneous to the intelligence: Being smart is not the same as wanting something. Even if, in the brains of Homo sapiens, reasoning comes bundled with goals such as dominating rivals and amassing resources, there is no law that says that intelligent agents must turn into ruthless megalomaniacs.
  • Intelligence is a contraption of gadgets: software modules that acquire, or are programmed with, knowledge of how to pursue various goals in various domains:
    • People are equipped to find food, win friends and influence people, charm prospective mates, bring up children, move around in the world, and pursue other human obsessions and pastimes.
    • Computers may be programmed to take on some of these problems (like recognizing faces), not to bother with others (like charming mates), and to take on still other problems that humans can't solve (like simulating the climate or sorting millions of accounting records).
    • The problems are different, and the kinds of knowledge needed to solve them are different.

David Deutsch: Beyond Reward and Punishment

  • For most of our species' history, our ancestors were barely people. This was not due to any inadequacy in their brains. On the contrary, even before the emergence of our anatomically modern human sub-species, they were making things like clothes and campfires, using knowledge that was not in their genes. It was created in their brains by thinking, and preserved by individuals in each generation imitating their elders. Moreover, this must have been knowledge in the sense of understanding, because it is impossible to imitate novel complex behaviors like those without understanding what the component behaviors are for.
  • Creative criticism, interleaved with creative conjecture, is how humans learn one another's behaviors, including language, and extract meaning from one another's utterances. Those are also the processes by which all new knowledge is created: They are how we innovate, make progress, and create abstract understanding for its own sake. This is human-level intelligence: thinking. It is also, or should be, the property we seek in AGI.
  • In the past, progress happened only on timescales much longer than people's lifetimes, so in a typical generation no one benefited from any progress. Therefore the benefits of the ability to innovate can have exerted little or no evolutionary pressure during the biological evolution of the human brain. That evolution was driven by the benefits of preserving cultural knowledge.
  • From the earliest days of thinking, children must have been cornucopias of creative ideas and paragons of critical thought - otherwise, they could not have learned language or other complex culture.
  • By now, it is hard for us even to conceive of the kind of relentless, finely tuned oppression required to reliably extinguish, in everyone, the aspiration to progress and replace it with dread and revulsion at any novel behavior. In such a culture, there can have been no morality other than conformity and obedience, no other identity than one's status in a hierarchy, no mechanisms of cooperation other than punishment and reward. So everyone had the same aspiration in life: to avoid the punishments and get the rewards. In a typical generation, no one invented anything, because no one aspired to anything new, because everyone had already despaired of improvement being possible.
  • All the effort that has ever increased the capabilities of AIs has gone into narrowing their range of potential "thoughts".
  • An AGI is certainly capable of learning to win at chess - but also of choosing not to, or choosing to play interestingly instead of to win, or inventing a new game. A mere AI is incapable of having any ideas, because the capacity for considering them has been designed out of its constitution.
  • No doubt there will be AGI criminals and enemies of civilization, just as there are human ones. But there is no reason to suppose that they will choose to become enemies of civilization.
  • The moral component, the cultural component, the element of free will - all make the task of creating an AGI fundamentally different from any other programming task. It's much more akin to raising a child. It will have no specifiable functionality - no fixed, testable criterion for what shall be a successful output for a given input.
  • Human brains can think anything that AGIs can, subject only to limitations of speed or memory capacity, both of which can be equalized by technology.
  • TOM (Totally Obedient Moron) vs DATA (Disobedient Autonomous Thinking Application), but how does one test for disobedience?
  • A person's quest for understanding is a search problem, in an abstract space of ideas far too large to be searched exhaustively. Objectives are ideas like any others - created as part of the search and continually modified and improved.
  • To an AGI, the whole space of ideas must be open. It should not be knowable in advance what ideas the program can never contemplate. And the ideas that the program does contemplate must be chosen by the program itself, using methods, criteria, and objectives that are also the program's own.

Tom Griffiths: The Artificial Use of Human Beings

  • Inverse reinforcement learning is a statistical problem: We have some data - the behavior of an intelligent agent - and we want to evaluate various hypotheses about the rewards underlying that behavior. When faced with this question, a statistician thinks about the generative model behind the data: What data would we expect to be generated if the intelligent agent was motivated by a particular set of rewards? Equipped with the generative model, the statistician can then work backwards: What rewards would likely have caused the agent to behave in that particular way?
  • If you're trying to make inferences about the rewards that motivate human behavior, the generative model is really a theory of how people behave - how human minds work. Inferences about hidden causes behind the behavior of other people reflect a sophisticated model of human nature that we all carry around in our heads. (A minimal sketch of this working-backward inference appears at the end of this section.)
  • In any situation where the time you spend making a decision is costly - at the very least because it's time you could spend doing something else - the classic notion of rationality is no longer a good prescription for how one should behave.
  • To develop a more realistic model of rational behavior, we need to take into account the cost of computation. Real agents need to modulate the amount of time they spend thinking by the effect the extra thought has on the results of a decision.
  • This trade-off can be formalized, resulting in a model of rational behavior that AI researchers call "bounded optimality".
  • Rather than condemning the heuristics that people use as irrational, we can think of them as a rational response to constraints on computation.
  • Making sense of toddlers requires building a cognitive model of the mind of a toddler. Superintelligent AI systems face the same challenge when trying to make sense of human behavior.
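
A minimal sketch of the working-backwards step described above (the toy options, reward hypotheses, and noise model are my own, not the chapter's): assume a generative model of behavior, here a softmax-rational chooser, then use Bayes' rule to score competing reward hypotheses against the observed choices.

```python
import math

options = ["coffee", "tea", "water"]

# Competing hypotheses about the rewards driving the agent's choices (values invented).
hypotheses = {
    "likes caffeine":    {"coffee": 1.0, "tea": 0.8, "water": 0.0},
    "likes warm drinks": {"coffee": 0.7, "tea": 0.7, "water": 0.0},
    "indifferent":       {"coffee": 0.0, "tea": 0.0, "water": 0.0},
}

def choice_prob(choice, rewards, beta=3.0):
    """Generative model: P(choice | rewards) for a noisily rational (softmax) agent."""
    z = sum(math.exp(beta * rewards[o]) for o in options)
    return math.exp(beta * rewards[choice]) / z

observed = ["coffee", "coffee", "tea", "coffee"]   # the behavioral data

# Work backwards with Bayes' rule: which rewards would likely have produced this behavior?
prior = 1.0 / len(hypotheses)
unnormalized = {}
for name, rewards in hypotheses.items():
    likelihood = 1.0
    for choice in observed:
        likelihood *= choice_prob(choice, rewards)
    unnormalized[name] = prior * likelihood

total = sum(unnormalized.values())
posterior = {name: round(p / total, 3) for name, p in unnormalized.items()}
print(posterior)   # the "likes caffeine" hypothesis gets the most weight
```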

Anca Dragan: Putting the Human into the AI Equation

  • Achieving action compatibility will require robots to anticipate human actions, account for how those actions will influence their own, and enable people to anticipate robot actions. Research has made a degree of progress in meeting these challenges, but we still have a long way to go.
  • A seemingly better paradigm might be for robots to optimize for what we internally want, even if we have trouble explicating it. They would use what we say and do as evidence about what we want, rather than interpreting it literally and taking it as a given. When we write down a reward function, the robot should understand that we might be wrong: that we might not have considered all facets of the task; that there's no guarantee that said reward function will always lead to the behavior we want. The robot should integrate what we wrote down into its understanding of what we want, but it should also have a back-and-forth with us to elicit clarifying information. It should seek our guidance, because that's the only way to optimize the true desired reward function.

Chris Anderson: Gradient Descent

  • A gradient describes any gradual transition from one level to another one - as opposed to a step function, which describes a discrete change.
  • Every action in the universe is driven by some gradient, from the movement of the planets around gravity gradients to the joining of atoms along electric-charge gradients to form molecules. Our own urges, such as hunger and sleepiness, are driven by electrochemical gradients in our bodies. And our brain's functions, the electrical signals moving along ion channels in the synapses between our neurons, are simply atoms and electrons flowing "downhill" along yet more electrical and chemical gradients. Forget clockwork analogies; our brains are closer to a system of canals and locks, with signals traveling like water from one state to another.
  • The essence of intelligence is learning, and we do that by correlating inputs with positive or negative scores (rewards or punishments).
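
A minimal sketch of descending a gradient, using a one-dimensional quadratic as the "landscape": repeatedly step against the local slope until it flattens out.

```python
# Gradient descent on f(x) = (x - 3)^2, whose minimum sits at x = 3.
def f(x):
    return (x - 3.0) ** 2

def grad_f(x):
    return 2.0 * (x - 3.0)   # the local slope

x = 0.0      # starting point
lr = 0.1     # step size
for _ in range(50):
    x -= lr * grad_f(x)      # move a small step "downhill", against the gradient

print(round(x, 4), round(f(x), 6))   # x approaches 3, the bottom of the valley
```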

David Kaiser: "Information" for Wiener, for Shannon, and for Us

  • Shannon's key insight was that the information of a message was just like the entropy of a gas: a measure of the system's disorder.
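
For reference, the parallel Kaiser draws can be written out: Shannon's entropy of a message source and the Gibbs form of thermodynamic entropy share the same mathematical shape, differing only in the constant and the base of the logarithm.

```latex
H = -\sum_i p_i \log_2 p_i \quad \text{(bits per symbol, } p_i = \text{probability of symbol } i\text{)}
\qquad
S = -k_B \sum_i p_i \ln p_i \quad \text{(entropy of a gas over its microstates)}
```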

Neil Gershenfeld: Scaling

  • Cycles of AI development, all of which do the same thing, making inferences from observations:
    • Mainframes were going to automate away work
    • Expert systems were going to codify and then replace the knowledge of experts
    • Perceptrons were going to model how the brain works
    • Multilayer perceptrons couldn't handle real-world problems
    • Deep learning is delivering on many early AI promises but in a way that is hard to understand, with consequences ranging from intellectual to existential threats.
  • Shannon wrote his thesis about how electrical circuits could be designed to evaluate arbitrary logical expressions, introducing the basis for universal digital logic.
  • When machines communicate with symbols rather than continuous quantities, their behavior is very different. The exponential decrease in communication errors made possible an exponential increase in the capacity of communication networks (a minimal sketch at the end of this section illustrates why). And that eventually solved the problem of where the knowledge in an AI system came from.
  • A linear increase in network complexity led to an exponential increase in the expressive power of the network.
  • The curse of dimensionality - the solution came in using information about the problem to constrain the search. The search algorithms themselves are not new. But when applied to a deep-learning network, they adaptively build up representations of where to search. The price of this is that it's no longer possible to exactly solve for the best answer to a problem, but typically all that's needed is an answer that's good enough.
  • Both brains and computer chips are hard to understand by watching their inner workings; they're easily interpreted only by observing their external interfaces. We come to trust (or not) brains and computer chips alike based on experience that tests them rather than on explanations for how they work.
  • Many branches of engineering are making a transition from what's called imperative to declarative or generative design.
  • The mother of all design problems is the one that resulted in us. The way we're designed resides in one of the oldest and most conserved parts of the genome, called the Hox genes. These are genes that regulate genes, in what are called developmental programs. Nothing in your genome stores the design of your body; your genome stores, rather, a series of steps to follow that results in your body. This is an exact parallel to how search is done in AI. There are too many possible body plans to search over, and most modifications would be either inconsequential or fatal. The Hox genes are a representation of a productive place for evolutionary search. It's a kind of natural intelligence at the molecular level.
  • What's interesting about amino acids is that they're not interesting. They have attributes that are typical but not unusual, such as attracting or repelling water. But just 20 types of them are enough to make you. In the same way, 20 or so types of digital-material part types - conducting, insulating, rigid, flexible, magnetic, etc - are enough to assemble the range of functions that go into making modern technologies like robots and computers.
  • Wiener, Turing, and von Neumann all wrote about self-reproducing systems.
  • Making an assembler that can assemble itself from the parts that it's assembling is a focus of my lab, along with collaborations to develop synthetic cells.
  • It was an advance for:
    • Atoms to form molecules
    • Molecules to form organelles
    • Organelles to form cells
    • Cells to form organs
    • Organs to form organisms
    • Organisms to form families
    • Families to form societies
    • Societies to form civilizations
  • This grand evolutionary loop can now be closed, with atoms arranging bits arranging atoms.
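
A minimal sketch (my illustration, not from the chapter) of the earlier point about symbols and errors: repeating each bit and taking a majority vote drives the error probability down exponentially with the number of repetitions, which is what lets digital communication scale.

```python
from math import comb

p = 0.1   # probability that noise flips any single transmitted bit

def error_after_majority_vote(n):
    """P(the majority of n noisy copies is wrong), for odd n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 7, 9):
    print(n, round(error_after_majority_vote(n), 6))
# 1 copy -> 0.1, 3 -> 0.028, 5 -> 0.00856, ... : each added pair of repetitions
# multiplies the residual error down rather than merely subtracting from it.
```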

W Daniel Hillis: The First Machine Intelligences

  • Corporations and bureaus, in some sense the first AIs, are not just made of humans, but are rather hybrids of humans and the IT that allows them to coordinate. In Wiener's time, they could not operate without telephones, telegraphs, radios, and tabulating machines. Now, they need networks of computers, databases, and decision support systems.
  • They are technologically augmented networks of humans with superhuman powers. They can know more than individual humans, they can sense more, they can make more complicated analyses and more complex plans. They can have vastly more resources and power than any single individual.
  • Hybrid superintelligences like nation-states and corporations have their own emergent goals, and their actions are not always aligned with the interests of the people who created them.
  • The neurons that carry their corporate thought are not just the human employees or the technologies that connect them; they are also coded into the policies, incentive structures, culture, and procedural habits of the corporation. The emergent corporate goals do not always reflect the values of the people who implement them.
  • We are close to being able to build superintelligences out of pure IT without human components.
  • Countries had more or less unquestioned authority when most of the actors were humans who spent their lives within a single nation-state, but now that the actors of importance include geographically distributed hybrid intelligences such as multinational corporations, that logic is less obvious.
  • Today we live in a complex transitional period, when distributed superintelligences still largely rely on the nation-states to settle the arguments arising among them. Often, those arguments are resolved differently in different jurisdictions.
  • It is becoming more difficult even to assign individual humans to nation-states. International travelers living and working outside their native countries, refugees, and immigrants (documented or not) are still dealt with as awkward exceptions. Superintelligences built purely of IT will prove even more awkward for the territorial system of authority, since there is no reason why they need to be tied to physical resources at all. An AI might well exist in the cloud rather than at any physical location.
  • Four possible scenarios:
    • State AIs - American vs Chinese AIs wrestling each other for resources on behalf of their state.
    • Corporate AIs - With corporations becoming more powerful and autonomous than nation-states.
    • Self-Interested AIs - Which act solely in their own interests, with humans viewed as ants or as simply irrelevant
    • Altruistic AIs - Working to further the goals of humanity as a whole.
  • Cybernetics is the study of how the weak can control the strong.
  • For Wiener, information is a way for the weak to effectively cope with the strong. Bateson defines information as a difference that makes a difference.
  • Inherent in the notion of a control system is the concept of amplification, which makes the small big and the weak strong.
  • First law of cybernetics - to control a system completely, the controller must be as complex as the controlled.

Venki Ramakrishnan: Will Computers Become Our Overlords?

-

Alex "Sandy" Pentland: The Human Strategy

  • You don't have to watch AI; instead you should watch what it eats and what it does.
  • Regulators, bureaucracies, and part of the government are very much like AIs: They take in the rules that we call law and regulation, and they add government data, and they make decisions that affect our lives.
  • If we have the data that go into and out of each decision, we can easily ask, Is this a fair algorithm? Is this AI doing things that we as humans believe are ethical? This human-in-the-loop approach is called "open algorithms"; you get to see what the AIs take as input and what they decide using that input. If you see those two things, you'll know whether they're doing the right thing or the wrong thing. It turns out that's not hard to do. If you control the data, then you control the AI.
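
A minimal sketch of that input/output auditing idea (group names and decisions are hypothetical): given only what the AI consumed and what it decided, an outsider can already ask whether outcomes differ across groups, without opening up the system's internals.

```python
from collections import defaultdict

# Hypothetical log of what the system took as input and what it decided.
decision_log = [
    ("group_a", "approved"), ("group_a", "denied"), ("group_a", "approved"),
    ("group_b", "denied"),   ("group_b", "denied"), ("group_b", "approved"),
]

counts = defaultdict(lambda: {"approved": 0, "total": 0})
for group, decision in decision_log:
    counts[group]["total"] += 1
    if decision == "approved":
        counts[group]["approved"] += 1

for group, c in sorted(counts.items()):
    print(group, "approval rate:", round(c["approved"] / c["total"], 2))
# Large, unexplained gaps between groups are a prompt to challenge the algorithm.
```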

Hans Ulrich Obrist: Making the Invisible Visible: Art Meets AI

-

Alison Gopnik: AIs vs Four-year-Olds

  • I think babies and children are actually more conscious than we are as adults. They're very good at taking in lots of information from lots of different sources at once. They are the R&D division of the human species.
  • The most sophisticated AIs are still far from being able to solve problems that human four-year-olds can accomplish with ease.
  • We learn an enormous amount even when we are small children; four-year-olds know about plants and animals and machines, desires, beliefs, and emotions, even dinosaurs and spaceships.
  • Science has extended our knowledge about the world to the unimaginably large and infinitesimally small, to the edge of the universe and the beginning of time. But all that reaches any of us from the world is a stream of photons hitting our retinas and disturbances of air at our ear drums. How do we learn so much about the world when the evidence we have is so limited? And how do we do all this with the few pounds of grey goo that sits behind our eyes?
  • The best answer so far is that our brains perform computations on the concrete, particular, messy data arriving at our senses, and those computations yield accurate representations of the world. The representations seem to be structured, abstract, and hierarchical; they include the perception of 3D objects, the grammars that underlie language, and mental capacities like theory of mind, which lets us understand what other people think. Those representations allow us to make a wide range of new predictions and imagine many new possibilities in a distinctively creative human way.
  • Although children are dramatically bad at planning and decision making, they are the best learners in the universe. Much of the process of turning data into theories happens before we are five.
  • Two approaches for learning:
    • Aristotle - Bottom-up. Start with the senses. See also Hume, Mill, Pavlov and Skinner
    • Plato - Top-down. See also rationalists like Descartes and Chomsky.
  • Working out if an email is spam, I started out with a generative model that includes abstract concepts like greed and deception and describes the process that produces email scams. That lets me recognize the classic Nigerian email spam, but it also lets me imagine many different kinds of possible spam. When I get the journal email, I can work backward: This seems like just the kind of mail that would come out of a spam-generating process. (A minimal sketch of this working-backward step appears at the end of this section.)
  • Computers need millions of examples to categorize objects that we can categorize with just a few. These bottom-up systems can generalize to new examples; they can label a new image as a cat fairly accurately overall. But they do so in ways quite different from how humans generalize. Some images almost identical to a cat image won't be identified by us as cats at all. Others that look like a random blur will be.
  • Top-down was revived in the 2000s in the form of Bayesian generative models. These combine generative models and hypothesis testing with probability theory. A Bayesian model lets you calculate just how likely it is that a particular hypothesis is true, given the data. And by making small but systematic tweaks to the models we already have, and testing them against the data, we can sometimes make new concepts and models from old ones.
  • How do you decide what hypotheses are worth testing in the first place?
  • In bottom-up, you don't need much knowledge, but a lot of data, and you can only generalize in a limited way. In top-down, you can have just a few examples and make much broader and more varied generalizations, but you need to build much more into it to begin with. Now we are trying to combine deep learning and Bayesian inference.
  • Four-year-olds can learn by taking just one or two examples, as a top-down system does, and generalizing to very different concepts. But they can also learn new concepts and models from the data itself, as a bottom-up system does.
  • While recent remarkable AI advances take place in narrow and well-defined spaces of hypotheses and concepts, like games and images, children and scientists sometimes change their concepts in radical ways, performing paradigm shifts rather than simply tweaking the concepts they already have.
  • Pre-schoolers are better at coming up with unlikely hypotheses than older children and adults. We have almost no idea how this kind of creative learning and innovation is possible.
  • Children are active learners; they don't just passively soak up data like AIs do. They are intrinsically motivated to extract information from the world around them through their endless play and exploration. And this exploration is more systematic than it looks and is well adapted to find persuasive evidence to support hypothesis formation and theory choice. Building curiosity into machines and allowing them to actively interact with the world might be a route to more realistic and wide-ranging learning.
  • Even pre-schoolers learn through imitation and by listening to the testimony of others. But they don't simply passively obey their teachers. They take in information from others in a remarkably subtle and sensitive way, making complex inferences about where the information comes from and how trustworthy it is and systematically integrating their own experiences with what they are hearing.
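
A minimal sketch of the spam example's working-backward step (all probabilities invented): start from hypotheses about the process that generated the email, score how likely each is to produce the observed features, and invert with Bayes' rule.

```python
prior = {"scam": 0.3, "legitimate": 0.7}

# Generative model: how likely each kind of sender is to produce each observed feature.
likelihood = {
    "scam":       {"promises money": 0.8,  "asks for a fee": 0.7},
    "legitimate": {"promises money": 0.05, "asks for a fee": 0.1},
}

observed = ["promises money", "asks for a fee"]

unnormalized = {}
for hypothesis in prior:
    p = prior[hypothesis]
    for feature in observed:
        p *= likelihood[hypothesis][feature]   # P(features | hypothesis) * P(hypothesis)
    unnormalized[hypothesis] = p

total = sum(unnormalized.values())
posterior = {h: round(p / total, 3) for h, p in unnormalized.items()}
print(posterior)   # the "spam-generating process" hypothesis dominates (~0.98)
```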

Peter Galison: Algorists Dream of Objectivity

  • Algorist - Someone profoundly suspicious of the intervention of human judgement.
  • In many studies, across domains ranging from sentencing to psychiatry, predictions using actuarial tables, a multiple-regression equation, or an algorithmic judgement equaled or exceeded in accuracy those using the subjective approach.
  • Scientific objectivity has a history, starting after the first third of the 19th C. It came to mean that our representations of things were executed by holding back from intervention. But then, subjective factors have to be part of the scientific work needed to create, classify, and interpret scientific images.

George M Church: The Rights of Machines

  • The main reason for the 6 tn joules of energy required to get a productive human mind is the 20 years required for training.

Caroline A Jones: The Artistic Use of Cybernetic Beings

  • As a word, cybernetics claimed postwar newness for concepts that were easily 400 yrs old: notions of feedback, machine damping, biological homeostasis, logical calculation, and systems thinking that had been around since the Enlightenment (boosted by the Industrial Revolution), coming from Descartes, Leibniz, Carnot, Clausius, Maxwell, and Watt.
  • Wiener: "We have modified our environment so radically that we must now modify ourselves in order to exist."
  • In the 60s and 70s artists were bricoleurs of electronic bodies, interested in actions rather than calculation or cognition.
  • Some sculptures stimulated (and simulated) an affective rather than a rational interaction. Humans felt they were encountering behaviors indicative of responsive life.
  • What interested artists were machinic motions evoking drives, instincts, and affects; they mimicked sexual and animal behaviors, as if below the threshold of consciousness.
  • Consciousness is not just "in here" but streaming in and out, harmonizing those sensory, scintillating signals. Mind happens well outside the limits of the cranium.
  • Bateson: Mind is material "not necessarily defined by a boundary such as an envelope of skin."
  • Right cybernetics vs left cybernetics.
  • Bateson's God is a placeholder for our evanescent experience of interacting consciousness-in-the-world: larger mind as a result of inputs and actions that then become inputs for other actions in concert with other entities - webs of symbiotic relationships that form patterns we need urgently to sense and harmonize with.

Stephen Wolfram: Artificial Intelligence and the Future of Civilization

  • I see technology as taking human goals and making them automatically executable by machines:
    • Previously we wanted to move objects from here to there.
    • Now the work is mental rather than physical.
  • A brain receives certain input, it computes things, it causes certain actions to happen, it generates a certain output. Like the weather. All sorts of systems are, effectively, doing computations - whether it's a brain, or a cloud responding to its thermal environment.
  • What makes us different is the particulars of our history, which give us our notions of purpose and goals. When the box on our desk thinks as well as the human brain does, what it still won't have, intrinsically, are goals and purposes. Those are defined by our particulars - biology, psychology, and cultural history.
  • Today we've got all kinds of purposes. 1000 yrs ago it was do I have food and am I safe. Now, many of us barely think about those purposes.
  • There isn't a bright line between what is intelligent and what is simply computational.
  • You can take a large collection of the world's knowledge and automatically answer questions on the basis of it, using what are essentially merely computational techniques. This is much more analogous to what biology does in evolution.
  • My approach was to make a language that panders not to the computers but to the humans, to take whatever a human thinks of and convert it into some form that the computer can understand.
  • There are maybe 5000 picturable common nouns in English, 10,000 if you include things like special kinds of plants and beetles that people would recognize with some frequency. We trained our system on 30 m images of these kinds of things. It's a big, complicated, messy neural network, which takes about a quadrillion GPU operations to do the training.
  • It's impressive because it pretty much matches what humans can do. It has about the same training data that humans have - about the same number of images a human infant would see in the first couple of years of its life. Roughly the same number of operations have to be done in the learning process, using about the same number of neurons in at least the first 3 levels of our visual cortex.
  • In the late 1600s, Leibniz, Wilkins and others were concerned with what they called philosophical languages - complete, universal, symbolic representations of things in the world.
  • How knowledge has been transmitted:
    • Genetics
    • Physiological recognition
    • Natural language
    • Knowledge-based programming, which is immediately executable, with the knowledge of the world knitted into the language you're using to write code.
  • There's computation all over the universe; in a turbulent fluid producing some complicated pattern of flow, in the celestial mechanics of planetary interactions, in brains. But does it have a purpose? Does the weather have a goal? Does climate have a goal?
  • I don't think there is abstract purpose per se or abstract meaning. Does the universe have a purpose? Then you're doing theology in some way. Purpose is something that comes only from history.
  • There are computational processes that you can go through in which there is no way to shortcut that process. Much of science has been about shortcutting computation done by nature. If we're doing celestial mechanics and want to predict where the planets will be in a million years, we don't have to follow the motion step-by-step; the equations reduce the computation. We can be smarter than the universe and predict the endpoint without going through all the steps, but some details are irreducible. That's why history means something. If we could get to the endpoint without going through the steps, history would be, in some sense, pointless.
  • How do you determine whether abstract AI has a purpose?
  • What's the difference between a rock and a box of a trillion downloaded souls? The details of what's happening in the box were derived from the long history of human civilization, whereas the rock has its long geological history but not the particular history of our civilization.
  • Realizing that there isn't a genuine distinction between intelligence and mere computation leads you to imagine that future - the endpoint of our civilization as a box of a trillion souls, each of them essentially playing a video game forever. What is the "purpose" of that?