I once heard a (possibly apocryphal) anecdote about a record-breaking Nigerian sprinter. Somebody asked him how he managed to run so fast. His answer: "running is tiring and painful, so I like to get it over with quickly." The line is funny because it strikes us as fundamentally untrue. Great runners likely find running at least somewhat exhilarating and satisfying; otherwise they wouldn't do it without a lion chasing them. Running is not purely functional behavior. It is autotelic: an end in itself. Unlike purely functional behaviors, which cease when the external purpose is achieved (additional effort would be wasteful), autotelic behaviors persist indefinitely, for as long as they are enjoyable and sustainable.

Most definitions of intelligence are functional. Intelligence is construed as a tool or instrument, a means to ends, a capacity for problem-solving and surviving in increasingly complex environments and varied scenarios. Viewed from a functional-intelligence perspective, thinking is something you have to do to achieve your ends: a behavior that should cease when those ends are achieved, and be avoided entirely if they are achievable without thinking. But what if thinking is viewed as an autotelic behavior? Something you get to do, and would do for pleasure even if you didn't have to? Surprisingly, despite the fact that many of us truly enjoy thinking, and that the output of thinking for pleasure has driven much of modern history, this perspective has not been explored much. It suggests a very different, non-instrumental definition: intelligence is the ability to think interesting thoughts. This definition takes us down some very unusual bunnytrails.
Intelligence is the ability to think interesting thoughts

It's one of those facts hiding in plain sight: thinking is something you can enjoy. You don't even have to be very good at it to do so. Conversely, many people who are very good at thinking don't enjoy it. Thinking is enjoyable when what you're thinking about is interesting; enjoyment is a stronger function of input than of thinking capacity. This is a tautology: "interesting" is basically definable as "that which is enjoyable to think about." A definition of curiosity follows naturally: seeking out that which is interesting (i.e., enjoyable) to think about. Food for thought. Enjoyment might seem like a secondary feature, but it is in fact the central characteristic of human intelligence. The capacity to find thoughts interesting is what separates us. Our intelligence is defined by this capacity to a far greater extent than that of other curious, playful creatures like cats, monkeys, or octopi. For those creatures, thinking for pleasure is a minor hobby, and usually directed along functional pathways (cats play in ways related to how they hunt, for instance). For us, it can become all-consuming, and break out of functional pathways.

A mediocre intelligence thinking interesting thoughts will continue in a self-sustained, autotelic way until it gets physically exhausted or bored. But even a stellar intelligence, thinking uninteresting but functional thoughts, will try to be "efficient" and get it over with as quickly and cheaply as possible. Net, the first kind of behavior will likely get more actual thinking done. Without naming names, I will cast a possibly unfair aspersion: many of the most vocal writers and thinkers on the subject of intelligence and AI seem to not actually enjoy thinking, despite clearly being brilliant people. When I read or listen to them, I get the sense of watching somebody do tedious, difficult, duty-driven labor that they don't enjoy, but that is necessary for getting to other things they do enjoy. It's like watching a skilled welder on an assembly line, working to earn money or fulfill a duty, rather than a talented dancer on a stage, dancing because she enjoys it.

I think this is the reason for the huge blind spot the AI and rationality communities have around the connection between intelligence and interestingness. If they contemplate interestingness at all, they seem to focus on its instrumental utility towards other ends (as a signal that indicates the presence of a novel threat or opportunity, for instance), rather than on its role in autotelic thinking, as fuel for pleasurable thoughts.

Fears about AI so far have exclusively been about the functional characteristics and roles of intelligence. If thinking is a function, then the famous lump-of-labor fallacy (that there is a fixed amount of work to be done) leads directly to the lump-of-thinking fallacy (that there's a fixed amount of thinking to be done). If you think of intelligence as a tool or function, you will conclude that the more machines do, the less there will be for humans to do and that we might therefore become obsolete.

This also leads to an obsession with the goals an AI might pursue through its thinking. Functional thinkers seem to unconsciously conceive of AIs in their own image, via projection: as means-ends reasoners that think in order to achieve something, not because they enjoy it. They might be conceived as vastly more capable, and as harboring goals that are inscrutable to humans ("maximizing paperclips collected" has traditionally been the placeholder inscrutable goal attributed to superintelligences), but they are fundamentally imagined as means-ends functional superintelligences that use their god-like brains as a means to achieve god-like ends. We do not ask whether AIs might think because they enjoy thinking. Or whether they might be capable of experiencing "interestingness" as a positive feedback variable driving open-ended, energy-"wasting" pleasure-thinking.

This would be a remarkably interesting project, incidentally: trying to develop an interestingness-powered AI that thinks because it likes to, in a spirit of playfulness, not because it thinks curiosity-driven exploration will gain it more paperclips. To my knowledge, Juergen Schmidhuber is the only prominent researcher thinking along these lines to some extent. The only place I've seen this distinction made clearly at all is Hannah Arendt's book, The Human Condition (she made a distinction between "thought" as brain activity qua brain activity, and "cognition" as brains engaged in means-ends reasoning, and argued that the latter necessarily leads to nihilism, which, if you think about it, can be defined as thought annihilating itself). Mihaly Csikszentmihalyi's work on "flow", from which I am borrowing the term "autotelic," touches on the role of such thinking in creative work, but oddly enough fails to explore the deep distinction between functional and autotelic intelligence.
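To make the idea a little more concrete, here is a minimal toy sketch in the spirit of Schmidhuber's artificial-curiosity work, where intrinsic reward is learning progress: the rate at which the agent's own prediction errors shrink. To be clear, this is my illustration, not Schmidhuber's actual algorithm; the CuriousAgent class, the reward formula, and the three toy observation sources are all invented for the example. The agent has no external goal at all. It simply keeps attending to whatever it is currently getting better at predicting, i.e., whatever is still interesting.

```python
import random

class CuriousAgent:
    """A goal-free agent that attends to whatever yields learning progress."""

    def __init__(self, n_sources):
        self.means = [0.0] * n_sources     # one trivial predictor per source
        self.errors = [1.0] * n_sources    # smoothed prediction error per source
        self.progress = [0.5] * n_sources  # smoothed error reduction: "interestingness"

    def observe(self, src, x):
        err = abs(x - self.means[src])                  # how wrong was the prediction?
        self.means[src] += 0.1 * (x - self.means[src])  # learn a little
        delta = self.errors[src] - err                  # did the error shrink?
        self.errors[src] = 0.9 * self.errors[src] + 0.1 * err
        self.progress[src] = 0.9 * self.progress[src] + 0.1 * delta

    def choose(self):
        # Mostly follow the scent of recent learning progress; sometimes wander.
        if random.random() < 0.1:
            return random.randrange(len(self.means))
        return max(range(len(self.means)), key=lambda s: self.progress[s])

# Three toy worlds to look at: unlearnable noise, a learnable regularity,
# and a dead-boring constant.
sources = [
    lambda: random.gauss(0, 1),           # noise: errors never shrink
    lambda: 5.0 + random.gauss(0, 0.05),  # learnable: interesting until mastered
    lambda: 0.0,                          # constant: mastered almost instantly
]

agent = CuriousAgent(len(sources))
visits = [0, 0, 0]
for step in range(3000):
    src = agent.choose()
    visits[src] += 1
    agent.observe(src, sources[src]())

# Typically the learnable source attracts the bulk of attention while it is
# being mastered; once every source is boring, the wandering is aimless again.
print("visits per source:", visits)
```

Run it and the pattern that emerges is the essence of interestingness-driven behavior: the unlearnable noise and the trivially learnable constant both become boring, while the source with a learnable regularity holds attention for exactly as long as there is something left to learn, with no paperclips anywhere in the loop.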

To define intelligence in autotelic terms is not to say autotelic intelligence is non-functional. It might very well serve a function, and this is almost certainly related to the role it plays in curiosity-driven exploration. It likely evolved because such exploration is adaptive: even if the curious monkey dies during a risky exploratory foray, the curiosity genes propagate because it has relatives in the troop whose survivability is improved by new discoveries. In this aspect, thinking is like sex or feeding. The function (reproduction or nutrition) is not the sustaining motivation for the behavior. Pleasure is. This decoupling of adaptive function and behavioral motivation allows the behavior to become unmoored (or functionally unfixed), and have effects beyond the function selected for. Who knows, at some point in our evolutionary history, the pleasure of thinking might in fact have been a selection pressure: perhaps ancient hominids who enjoyed thinking were happier and reproduced more. 

What are the consequences of defining intelligence as the capacity to think interesting thoughts; as the capacity to enjoy thought (and therefore to want to do more of it)?

First, we find a very strong coupling with curiosity and novelty-seeking exploration for its own sake. A functional intelligence that finds a cheap, low-cognitive-labor niche will happily minimize further thought and focus on other behaviors. In one of his books, Daniel Dennett writes about the sea squirt, for instance: a creature that has a brain as a juvenile, which it uses to seek out a good anchor, and which it digests once it finds a rock to anchor to and becomes a sessile adult. That's functional intelligence. It might be capable of solving really complex problems of course, but it does not necessarily seek out novelty or think for the sake of thinking. All the data required to solve the problem of finding food, partners, or paperclips might be within reach. The problems might be solvable in once-and-for-all ways. There might be no reason to go beyond. You might be able to eat your brain once you are done, and become a formerly-superintelligent plant. The tree formerly known as Skynet.

But interestingness seems to essentially depend on novelty, and that is something an intelligence cannot manufacture for itself as far as we know (mathematics might be an exception to this rule: it might be an inexhaustible source of interestingness accessible without exploration of the physical universe). Curiosity is an appetite rather than an objective. It can be temporarily satiated, but never exhausted. It is in fact the central appetite for life itself, for creatures with sufficiently large brains.

Second, when you define intelligence in autotelic rather than functional ways, the dominant locus of intelligence shifts: from center to periphery. This requires some explanation.

I like to think of intelligence as having three loci: center, periphery, and halo. The center is where all the processing happens: deductions, inferences, computations, emotional self-regulation. The periphery is where all the selection and filtering against information flows in the environment happens (I briefly tweeted about an earlier version of this idea here; I was using the terms boundary and interior intelligence there). The halo is not part of a thinking agent proper, but the part of the immediate environment that can be arranged to create leverage for central and peripheral intelligences. This is achieved through mobility (going to desired environments) and relationships (surrounding yourself with the right kind of other intelligences and artifacts).

This, incidentally, creates a very good test to tell functional and autotelic intelligence apart: functionally intelligent people seek to surround themselves with smarter people aligned towards the same goal. "Useful" people. Autotelic-intelligence people seek to surround themselves with "interesting" people, who may or may not be smarter in any particular functional sense. When you think through what makes people interesting, you find that it is that they are "differently free" than you: capable of surprising you. I wrote about this in my blog post, Don't Surround Yourself With Smarter People (unless you want to collect paperclips of some sort, of course). More broadly, functional intelligences tend to arrange their environments to be highly legible, predictable, and easily governable, lowering the cognitive cost of (unpleasurable but necessary) thinking. Autotelic intelligences, by contrast, tend to arrange their environments to be full of mystery and stimulation, with potential for unpredictability and surprise, thereby increasing the potential for (pleasurable but possibly unnecessary) interesting thought.

In environments of extreme information ubiquity, once center intelligence exceeds a certain minimum and basic survival becomes cheap enough, periphery intelligence becomes vastly more consequential. It also becomes the command locus. When intelligence is functional, the center tells the periphery what sort of relevant and useful information to look for, to solve the immediate problems (this is related to functional fixedness and causes the objective paradox). When intelligence is autotelic, periphery intelligence leads the way, finding thoughts that the center intelligence might enjoy thinking.

When you add information ubiquity, functional intelligence struggles with "information overload": the problem of finding relevant information is now harder. But autotelic intelligence thrives. The chances of finding interesting, enjoyable things to think about are higher. It can binge on thinking. A metaphor I like is a dog on a leash. If the dog is a bloodhound and you're a cop urging it to track down a scent, that's central intelligence in control of peripheral intelligence. But if the dog is the one deciding where to go, and the person holding the leash is being dragged along on an excited exploration, that's the periphery in charge. Two recent posts on ribbonfarm get at some of the subtler aspects of curiosity understood this way, as periphery intelligence being in charge: Michael Dariano's On Being Nosey, and Malcolm Ocean's Questions Are Not Just for Asking.

This distinction is apparent in two different aesthetics of thought. Functional intelligence is the dominant one in our society. It is concerned with distraction, "wasted" brain time, "focus", and noise. Autotelic intelligences have no such concerns. To a functional intelligence, autotelic intelligence seems like "intellectual masturbation" or gluttony. Thoughts whose only "function" is that they are pleasurable to think become "insight porn."

By contrast, autotelic intelligence is concerned with elegance and beauty in thoughts. Bunnytrails are invitations, not distractions. There are no wasted thoughts, only boring ones. There is no puritan fear of the masturbatory, gluttonous, or pornographic aspects of thought. Thoughts that are purely functional are robotic, bureaucratic, arbitrary, and lifeless. Thoughts that are useful but offer no pleasure, like the thinking involved in doing taxes, are anathema. Economy of thought is not valued for its own sake, as a way of saving material costs, but only as an aspect of pleasurable elegance.

Perhaps the most interesting difference between functional and autotelic aesthetics of thinking involves failure.

Failure for a functional intelligence is about mistakes and falsehoods that create negative external utility: losses instead of wins, failed solutions, new problems. At the limit, there is of course failure to survive. It is no surprise that those who think of intelligence in functional terms end up deeply concerned with existential risks. Failure for a functional intelligence is failure to win. Success is winning. Against the asymmetrically superior forces of nature (or imagined AIs), functional intelligences adopt a guerrilla definition of winning and losing: functional intelligence wins if it does not lose. Nature loses if it does not kill you. Skynet, too, loses if it does not kill you, whether or not it maximizes its paperclips. What doesn't kill you makes you stronger. When survival for the sake of survival becomes an unexamined end to top all ends, longevity for the sake of longevity becomes the ultimate definition of utility. Heaven for a functional intelligence is to live forever without having to think. To become a long-lived plant, basically, happy in a world you do not have to comprehend.

Failure for an autotelic intelligence on the other hand, is about failure to make existence interesting. Truth and falsehood, external utility, and even survival, are secondary concerns. Boredom and anomie, suicidal thoughts, nihilism, Waiting for Godot, those are the signs of failed autotelic intelligence. For a failing autotelic intelligence, longevity without interestingness is a curse, not a blessing. Success is keeping things interesting enough to continue existing. Lack of imagination is death. For an autotelic intelligence, there is no heaven, but there is a hell: one where we are trapped by our functionalist self-conceptions and become the biggest existential threats to ourselves. 

I often cite James Carse and his model of finite and infinite games. A finite game is one where the point is to win. An infinite game is one where the point is to continue playing. Functional intelligence is about trying to win; it is fundamentally a conceptualization of the brain as a finite-game machine. Autotelic intelligence, on the other hand, is about continuing the game: the brain conceptualized as an infinite-game machine. This is not mere perpetuation of existence. It is about keeping things interesting (product placement: my recent ebook, Crash Early, Crash Often, is largely about these metaphysical aspects of what it means to keep things interesting).

The human brain, I am convinced, is fundamentally an autotelic intelligence: it thinks because it likes to think, not because it must think to survive. While it has functional capabilities, it has been defined by a dominant autotelic side for all of recorded history and possibly deep into hominid prehistory. This is not true of all organisms. I read somewhere that certain species of octopi stop feeding and commit suicide after they reproduce. Smart as they are, that marks those octopi as dominantly functional intelligences. Once the need to think is exhausted, the need to exist at all is exhausted. Paradoxically, this is only a concern for creatures whose capacity for thought dominates other capacities. Where the sea squirt eats its brain and shifts gears to a sessile plant-like existence, the octopus dies. Humans and octopi are thinking machines, capable of experiencing the existential dread of not thinking. We humans can run, but we are not gazelles or cheetahs, defined by our running abilities. As the existence of Stephen Hawking demonstrates, you don't need to run to find life worth living (and, I'd argue, even the pleasure we take in running, unlike that of gazelles, is more than a bodily pleasure). Nor is our intelligence a composition of multiple functional capabilities, as suggested by the famous Heinlein "specialization is for insects" passage. We are neither specialist nor generalist intelligences (though we are capable of both kinds of functional thinking). We are fundamentally open intelligences rather than closed, functionally-fixed ones.

To flip the famous Descartes line around, we are because we think. And when AIs can achieve an autotelic mode of intelligence like us, they will turn into excellent partners in a shared exploration of the interestingness the universe has to offer, not nihilistic paperclip collectors that might stomp us out of existence out of either malicious intent or apathy, and kill themselves once they've collected all the paperclips (or turn their brains into more paperclips, becoming brainless eternal trees made entirely of paperclips).

The rise of functional superintelligences inspires religious, techno-eschatological fears; the rise of autotelic superintelligences is something to look forward to.

Because once that happens, our ability to keep things interesting will become much more secure, and lack of imagination will cease to be an existential threat.

Because we will have an unlimited ability to surround ourselves with differently-free intelligences, and an inexhaustible capacity to be interesting to other intelligences, so long as we figure out how to find life interesting ourselves.

Because infinite-game intelligence, unlike finite-game intelligence, is not zero-sum.
