________________________
SYMBOL-PUSHING SPIDERS
Dylan Holmes
________________________

2022/Feb/24

Our AI programs have learned to write. I remember the first time I saw this opinion piece [1] in the Guardian, written by the language-processing program GPT-3. It's impressive---and more than a little unnerving.

You have to wonder: Just how smart /are/ these programs? How much is gimmick and how much is technological breakthrough? True human-level AI poses an extreme existential risk---how close are we now?

After all, here we have a computer program that writes essays. That's famously hard work; it takes training and inspiration and logical flow. More than that, the act of writing itself feels intimately human. It expresses inner life. With due respect to the hand signs of chimps and the verbal stunts of parakeets, we are the only symbolic species. Or at least, we used to be.

Thinking about this new technology, I get a sense of how it felt to see the reigning chess champion Garry Kasparov lose to IBM's Deep Blue program in 1996. It must have felt like the beginning of the end for humanity. Nowadays, of course, when you can download a grandmaster-level chess program on your smartphone or write your own as a college homework assignment, it's easy to downplay the experience. But strategic thinking was supposed to be humanity's birthright, and chess was a game for the true masters. If a computer program could think strategically, how many other human-like skills (and vices!) were within easy reach?

Think about it: if you ever met an alien who could beat the best human players at chess, you'd naturally wonder what else it could do. You'd be right to wonder. Natural intelligences---including all life on earth, as well as these putative aliens---are holistic. Their capabilities come in flocks. In a natural organism, the functions for metabolism, reproduction, vision, etc. all developed together and work together in an intermeshed system for survival. Not one of these functions ever existed in isolation. Even though we sometimes describe intelligence as made of separate modules, reality isn't always so neat.

Holism is an important idea because it implies that natural intelligences never do just one thing well. If you see an unfamiliar animal sidle up to a watering hole and drink, you can safely assume that it can also do many other related things, such as identify threats and disturbances, navigate difficult terrain, feed itself, and reproduce. If you decide to approach it, it may flee---or attack.

In contrast, artificial intelligences can be hyperspecialized. While a chess-playing alien can presumably do other things besides play chess, some chess-playing programs---as we discovered with Deep Blue---really can't. In general, artificial skills can be stripped of biological bloat in a way that no natural organism can be: vision that sees only human faces, or route planning that has no body. Such isolated components of intelligence don't occur in nature; we've only just started building them [2].

As such, we tend to make a specific kind of mistake: we see a machine with one highly specialized ability and assume it has the retinue of related skills that, in nature, would accompany it [3]. The problem isn't that we're gullible; animal intelligence is often the only reference we've got.

We are especially likely to be misled when /human/ intelligence is the only reference we've got. Humans are the only ones that play chess, solve calculus problems, and write image captions, for example.
We have an idea of what each skill takes and how hard it is for us. We sympathize with anything that can do something similar. When you meet a person who can solve calculus problems, you can safely assume they've got other human capabilities: they can also read a book, count the number of desks in a room, name six colors, explain to someone else how to solve a calculus problem, get frustrated at a tough integral, and maybe write with a pencil. What can you assume about a corresponding /program/ that can solve calculus problems?

This brings us back to GPT-3 and the problem of judging programs that use language. Challenging our assumptions about breadth gives us a new tool for analyzing risk. On one hand, a given essay-writing program might use the same broad-spectrum capabilities /we/ use to write: social intelligence, logical thought, mental imagery, common sense, and the rest. After all, this is the way writing works "in the wild". In this view, a program like GPT-3 is exceptionally dangerous because, in mastering language manipulation, it has mastered most of the components of human cognition---apocalypse is a matter of connecting it to the wider world and leaving it switched on.

On the other hand, we've seen that holistic animal-like intelligence is not the only possibility. Another possibility is that the program is actually a narrow specialist---what I call a /symbol-pushing spider/. A symbol-pushing spider is a very peculiar beast, never before seen on this planet. It arranges essays out of words the way spiders spin webs out of filament. In other words, it is a verbal insect---an insect-level brain adapted to manipulating language. (Analogously, Deep Blue might be called a "chess-playing spider".) Despite its meager equipment, this simple automaton is able to replicate the texture of human-created essays---including cohesive vocabulary and coherent lines of argument---because our written works reveal which words collectively create such a texture. (A toy version of such a word-stringing mechanism is sketched below.)

Symbol-pushing spiders give us a way to challenge our intuition that verbal mastery is akin to general intelligence, that a program which /uses/ words necessarily has the same rich social capacity that led humans to /need/ words in the first place.

The general principle is that the bare components of intelligence---like the physicists' subatomic smithereens---never occur separately in nature (as in /just/ face recognition, /just/ route planning, /just/ neural adaptation, etc.), and are consequently likely to fool us about the range of what they can do. Being careful about this fact leads us to the right questions, including:

1. Is GPT-3 a symbol-pushing spider? That is, is its impressive performance more like a general human intelligence or more like a symbol-specialized insect?

2. How would you decide? That is, how would you challenge the hypothesis that GPT-3 has a bare language capability, with no other human capabilities within reach?

3. Even supposing GPT-3 is "merely" a hyperspecialized language module, what risks do such specialists pose? For example, how easily could a language module, in combination with other similarly specialized modules, produce a dangerous degree of general intelligence?

There is much we still don't know about isolated components of intelligence. When Deep Blue beat Kasparov at chess, we wanted to know whether its strategic mastery entailed other human-like capabilities. Now we want to know the same about GPT-3's grasp of language. We want to know whether language can be split off from general intelligence.
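To make the symbol-pushing-spider picture concrete, here is a minimal sketch of one---a toy illustration of mine, not a description of how GPT-3 or any actual program is built. It is a bigram Markov chain in Python; the function names (train_bigrams, spin_text) and the one-line corpus are invented for the example. The program records which words follow which in its training text and then strings words together from those counts alone, with no goals, no senses, and no other capability within reach.

    # A toy "symbol-pushing spider": a bigram Markov chain.  It counts which
    # words follow which in a training text, then strings words together from
    # those counts alone.  It has no goals, no senses, and no understanding,
    # yet its output locally resembles the text it was fed.
    # (Illustration only: the corpus is made up, and real language models
    # like GPT-3 are far more elaborate.)

    import random
    from collections import defaultdict

    def train_bigrams(text):
        """For each word, record the words observed to follow it."""
        words = text.split()
        followers = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            followers[current].append(nxt)
        return followers

    def spin_text(followers, seed, length=25):
        """Spin out text by repeatedly sampling a plausible next word."""
        word = seed
        output = [word]
        for _ in range(length):
            candidates = followers.get(word)
            if not candidates:      # dead end: this word was never followed
                break
            word = random.choice(candidates)
            output.append(word)
        return " ".join(output)

    if __name__ == "__main__":
        corpus = ("the spider spins a web of words and the web of words "
                  "looks like an essay because the words were spun from essays")
        print(spin_text(train_bigrams(corpus), seed="the"))

The sketch is deliberately crude, but it makes the logical point: locally fluent word sequences can come from a mechanism that can do nothing else whatsoever.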
We also want to know just how dangerous language modules can become in combination.

My goal has been to indicate what I think are the important questions. I am not claiming to know whether GPT-3 (or any other particular program) is within close reach of human capabilities, even if I am pointing out one way it might not be. And while I point out how artificial capabilities /can/ be artificially narrow, I don't assume they always are---in fact, I am acutely concerned about the prospect of machines with (super)human intelligence. I want an accurate sense of risk.

Finally, I believe there are serious risks even with narrow intelligences. Verbal insects represent a new possibility from the vast space of possible intelligences, in which natural intelligence is just one small niche. We must understand these possibilities if we're going to manage existential risk effectively. Even overestimates are dangerous---we've got to know just how much further we have to go.

I think of how humans, inspired by birds, developed the airplane. We replicated the abstract capacity for flight---mastered it, even, going longer distances and carrying heavier loads than birds ever did. Still, we have yet to build a plane that replicates the rest of the bird: feathers, flapping, nest-building, egg-laying, and the rest. An airplane doesn't have to unfold from a jumble of DNA in a single cell. Then again, the airplanes we've got are powerful and destructive instruments anyway.

Footnotes
_________

[1]

[2] As I see it, artificial intelligence often differs from natural intelligence the way a prosthetic limb differs from an original limb: they serve a similar functional role, but are built along different principles. They are our first prototypes, which may be superhuman in some respects and brittle in others. To be sure, I'm not a nature chauvinist: I think we can build artificial intelligences that meet or exceed animal self-sufficiency, just as I think we could eventually build prosthetics complete with blood vessels, nerves, and the rest if we chose to.

[3] Like the proverbial towel in /The Hitchhiker's Guide to the Galaxy/: "[A] towel has immense psychological value. For some reason, if a strag discovers that a hitchhiker has his towel with him, he will automatically assume that he is also in possession of a toothbrush, washcloth, soap, tin of biscuits, flask, compass, map, ball of string, gnat spray, wet-weather gear, space suit etc., etc. Furthermore, the strag will then happily lend the hitchhiker any of these or a dozen other items that the hitchhiker might accidentally have 'lost.'"