
I’ve been saying lately that I want to turn myself into a bot. What do I mean by that? The best answer is the character of the Electric Monk in Douglas Adams’ novel Dirk Gently’s Holistic Detective Agency. The Electric Monk is designed to believe things for you. This is not just hilarious; it is in fact a better statement of the problem of creating artificial humans than "Artificial Intelligence". You could say that AI is based on the idea that the essence of being human is intelligence: cognitive prowess. I disagree. I think the essence of being human is the capacity for belief. Traditional AIs can hold explicit beliefs and reason with them, but they are only just learning to believe beliefs. Getting computers and robots to believe things for you is not as difficult as getting them to think for you. In fact, it’s almost a solved problem. The reason it took so long is that we didn’t recognize the importance of the problem of artificial belief, or the consequences of it being solved.

Here’s the thing: when you can get computers to believe for you, your own thinking can get a lot more agile, burdened with a lot less belief inertia. In Boydian terminology, outsourcing belief work leads to faster transients. This has a LOT of profound consequences.

In this picture, I'm the bird. My electric monk, BeliefBot 3000, does all my believing work.

1/ We talk a lot about transaction costs and Coasean economics in organizations, yet we largely ignore these phenomena as they apply to the organization of our own brains.

2/ The central transaction cost in human cognition is context-switching cost. This comes in two varieties: voluntary and forced.

3/ Voluntary context switching is when you swap out one set of mental models (forming an orientation) for another, such as between a client sales call and play-time with your kid.

4/ Forced context switching happens when unexpected events or the actions of an adversary undermine the assumptions of your current mental models and force you to hastily improvise new ones.

5/ The former is the problem usually misframed as multi-tasking. The latter is what Boydians call reorientation. This is generally much harder, since it is likely you’ll need at least some improvisation.

6/ In both cases, the measure of the challenge is how long it takes you to switch, or the length of the transient. Ideally, you want fast transients, but without sacrificing orientation quality.

7/ How fast your transients can be depends on three things: the range of alternative orientations you can call up, the cost of the switch itself, and the degree of improvisation required.

8/ John Boyd, the originator of the concept, stumbled upon it while thinking about why American F-86 Sabres had such a high (10:1) kill ratio against MiG-15s in the Korean War.

9/ This was despite the planes being roughly evenly matched by Boyd’s own theory (known as energy-maneuverability) in terms of basic flight-physics level capabilities.

10/ Boyd hypothesized that the difference wasn’t due to Americans being better pilots, but to the Sabre’s hydraulic flight controls, which allowed faster reorientations.

11/ The moral of the story: a difference in context-switching systems can create an order-of-magnitude performance difference between otherwise similar sets of base capabilities.

12/ The lesson from the origin story of fast transients is that when machines do a lot of the work for you, you are only as good as your context-switching capabilities.

13/ In this example, make a note of the difference between the planes in two areas: beliefs about performance requirements in different flight regimes, and beliefs about the pilot.

14/ Both planes embodied similar mental models about performance requirements (for example turn and acceleration capabilities) under different conditions. 

15/ But they embodied different beliefs about the pilot. Hydraulic controls are easier to handle than mechanical, allowing faster reorientations. Hence fast transients.

16/ But wait, there’s more. By having to devote less attention to the more basic behavior of struggling with controls, the pilot could pay more attention to other factors.

17/ The transients weren’t just faster, they were better. If both pilots had (say) 10 different tactical orientations to choose from, the F-86 pilot would choose better maneuvering options.

18/ Faster, more accurate play (choosing the best reorientations, faster) forces the tempo and throws the adversary into increasing FUD, and a spiral of more unforced errors. 

19/ You could say the F-86 Sabre was a better artificial believing system (ABS) than the MiG-15. It embodied beliefs about flight control in hydraulic form, so the pilot didn’t have to hold them.

20/ Fun-fact: the more familiar kind of ABS, anti-lock braking systems, are in fact artificial belief systems as well. I'll leave you to figure out how as homework. 

21/ ABSes are different from straightforward automation or AI, and they are not about effective UI/UX design, as some mistakenly assume (though effective UI/UX is a contributing factor).

22/ Let’s generalize the F-86/MIG-15 example and level it up to a brave new future of electric monks and how they might allow us to turn ourselves into better context-switching bots.

23/ To speed up your transients, you can do three things, and technology has a role to play in each of them: a) maintain richer orientation libraries, b) switch faster, c) believe fewer things.

24/ Having more orientations and mental models available (more fox, less hedgehog), means you can deal with more familiar situations, and need less improvisation in unfamiliar ones.

25/ Switching faster is about simply improving the speed and accuracy with which you can get out of one mental state and into another. Things like meditation can help raw speed.

26/ But both of these well-known mechanisms, I argue, are highly limited. They don’t deliver orders-of-magnitude improvements in human performance.

27/ Having more orientations available requires a lot more training time invested up-front, and beyond a point, the cost of searching through your orientations library becomes prohibitive.

28/ Mythic bullet-dodging in martial arts movies aside, there is only so far you can get through things like meditation. Maybe you take 5 minutes instead of 15 between meetings.

29/ To get to orders of magnitude improvement in transients, you need to focus on believing fewer things. Which means your tech support systems have to believe more.

30/ The Matrix kinda nailed the idea way back in 1999. While in the matrix, characters could download arbitrary orientations for things like kung-fu and helicopter piloting.

31/ The cognitive prosthetics (presumably bits of code attached to the characters' immersed in-matrix personas) did most of the believing necessary to kick ass or fly helicopters.

32/ The human brains only had to do the fast context switching and pick the right orientations with which to navigate situations. This is in fact a pretty realistic account of assisted cognition.

33/ We use this mechanism in video games all the time. Human players do all the executive decision-making; the game characters execute kung-fu moves the players never could.

34/ Note that another kind of plot element in The Matrix, the bullet-dodging and gravity-defying leaps, represents a different idea: that you can use out-of-game knowledge to hack the game.

35/ Sadly, the latter kind of out-of-universe hack is not available to us. Despite what some wooful types and simulationists believe, you can’t vipassana or tai-chi your way to bullet-dodging levels of time management.

36/ Still, the first kind — outsourcing the job of believing things — is available to us. And this is what I mean by turning myself into a bot. 

37/ The idea of a bot is in some ways the opposite of the idea of a traditional AI. The archetype of a bot is a tiny, dumb artificial agent that does one thing for you on Twitter.

38/ My favorite Twitter bot, for example, is one called infinite scream. It merely tweets variants of AAAAAHH! periodically. It believes “the world is going to hell” for me.

39/ I am not even kidding. Seeing AAAAAHHH! periodically pop up in my feed and RTing it helps keep my OMIGOD APOCALYPSE orientation alive and healthy.

40/ Infinite scream is an example of a very (very) simple electric monk. It usefully embodies part of the mental models of an orientation so I don’t have to. It believes things for me.
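As an aside, here is roughly what such a monk amounts to in code: a minimal Python sketch, not the actual bot. The scream generator, the six-hour interval, and the post() stub are all assumptions standing in for whatever scheduling and posting API a real bot would use.

```python
import random
import time

def scream() -> str:
    """Generate a variant of AAAAAHH! with a random number of As and Hs."""
    return "A" * random.randint(2, 30) + "H" * random.randint(1, 10) + "!"

def post(text: str) -> None:
    # Stand-in for whatever posting API you actually use (e.g. a Twitter/X
    # client library); printing keeps this sketch self-contained and runnable.
    print(text)

if __name__ == "__main__":
    # The bot's entire belief system ("the world is going to hell"), held on
    # your behalf and re-asserted every six hours.
    while True:
        post(scream())
        time.sleep(6 * 60 * 60)
```

That is the whole point: the belief lives in a dozen lines of code that run without you, so your own head doesn't have to keep it warm.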

41/ This is very different from automation. Automation is the process of depersonalizing behaviors by capturing them as abstract, explicit procedural knowledge (how-to knowledge).

42/ Unlike automation, artificial belief systems retain the uniquely personal subjective posture, tacit knowledge, and context inherent in believing and acting from belief.

43/ An ABS creates a technologically extended society of mind: Your electric monk is a swarm belief system attached to your biological brain as a cloud of tacit belief energy.

44/ An ABS captures not just what I believe. It captures the way I believe it, and how I am pre-disposed to act from within a particular state of active belief. 

45/ Unlike proceduralized intelligence where you have to create explicit how-to knowledge, an ABS creates a more flexible potentiality for action through artificial belief fields, so to speak.

46/ For a more serious example, consider a now-extinct creature: the Blackberry-driven execubot of the late 90s/early 00s, before the iPhone changed that game.

47/ The Blackberry execubot came in two varieties. Type 1 used the Blackberry in conjunction with a human admin assistant to turn into a meeting machine. Type 2 turned into a moron.

48/ The 2-element support system comprised automation and artificial belief. The Blackberry automated the impersonal stuff, like calendar management and venues.

49/ The personal assistant of the late 90s was an extension of the executive’s beliefs about priorities and context. Since the Blackberry could do so little, the admin assistant did a lot.

50/ This included, besides managing the calendar, things like assembling the right context-documents in front of the executive at the right time in the right place.

51/ Executives would arrive at one meeting just minutes after the previous one, and the right emails and powerpoints would be ready for them. Opinions/decisions already half-formed.

52/ That wasn’t the blackberry. That was the human mechanical-turk electric monk, the admin assistant. By believing the right things on behalf of the executive, they could super-power them.

53/ Back then, automation without artificial belief support was terrible. Executives with top-of-the-line Blackberries but poor administrative support were problem-multiplying disasters.

54/ Today, 10 years since the birth of the iPhone and the death of the Blackberry, executives behave in subtly different ways, and all of us have become executives of our own lives.

55/ In the first few years, the iPhone made things worse because it was so much more capable than the Blackberry. Some of the belief-work done by admins fell through the cracks as it was being reeled back in.

56/ Now we’re recovering. It is no accident that the great power of the iPhone over the Blackberry required the invention of the Siris, Alexas, and Cortanas of the world.

57/ It is not entirely an accident of sexist assumption engineering that these early electric monk belief bots are modeled as disembodied female voices.

58/ Alan Turing once said, "No, I'm not interested in developing a powerful brain. All I'm after is just a mediocre brain, something like the President of AT&T."

59/ That was the original reference for the Turing Test. From that reference point, AIs evolved that could pass for human in broader and deeper contexts than being a CEO-execubot.

60/ The functional reference for the ABS is the human female administrative assistant of the Blackberry era. That's the starting point for the development of full-blown Electric Monks. 

61/ And no, it isn't an accident that the first reference for AI was a CEO, and the first for an ABS was an admin assistant. They are the human duals on the two sides of Moravec's paradox.

62/ Restated for humans in business, AI research initially believed the hard job was the CEO’s. ABSes embody a recognition that perhaps the harder job is the admin assistant’s.

63/ Whether the CEO's job or the admin assistant's job is fundamentally harder is an imponderable for the ages. I have no strongly held beliefs on the matter :P. 

64/ But the point here is that both are useful and complementary reference points for designing artificial versions of human capabilities such as believing beliefs and thinking thoughts.

65/ But let’s return to the problem of context-switching after that long digression. I said earlier that multi-tasking is a misframing of context-switching/fast transients.

66/ As David Allen likes to say, we never multi-task anyway. Instead we do very rapid context switching that creates the illusion of multi-tasking. Just like computer processors switching between virtual machines.

67/ So one way to measure the effectiveness of your context switching is to ask how many parallel threads of execution you can create the illusion of driving in your life.
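To make the processor analogy in 66/67 concrete, here is a toy round-robin scheduler in Python: one strictly serial worker, a handful of hypothetical "project" contexts, and the illusion of parallelism produced purely by switching between them. The project names and step counts are illustrative, not anything from the newsletter.

```python
def project(name, steps):
    # One "thread" of life: a generator that yields at each natural pause point.
    for i in range(steps):
        yield f"{name}: step {i + 1}"

def round_robin(contexts):
    # A single serial worker cycling through contexts, dropping each one as it finishes.
    queue = list(contexts)
    while queue:
        ctx = queue.pop(0)
        try:
            print(next(ctx))
            queue.append(ctx)
        except StopIteration:
            pass

round_robin([
    project("client call", 2),
    project("newsletter draft", 3),
    project("kid playtime", 2),
])
```

Your effective "thread count" is then just how many such contexts you can keep cycling through before the cost of each switch eats the gains.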

68/ Our shifting metaphors for this reveal how technological leverage has evolved. I grew up with the metaphor of spinning plates, which kinda reflects project/process level parallelism.

69/ In the pre-mobile era, an executive's parallel bandwidth was the number of projects/process "plates" (in the traditional rather than GTD sense) they could keep spinning.

70/ For senior execs, this often meant number of reports: one report per lumped responsibility. The bandwidth was about 8: the so-called "span of control."

71/ Not coincidentally, this was close to the famous Magic Number 7, plus or minus 2. That's the natural human limit of parallelism at traditional "project" levels of abstraction.

72/ With the mobile era, we began thinking in terms of encounter-level parallelism. This is useful if everything you do happens in meetings, via indirect influence on others through things you say.

73/ Think beyond regular workplaces. Teachers handle between 15 and 40 parallel “student” threads via joint meetings. Pagerbot doctors in the US can be responsible for over 2000 patients.

74/ This kind of scaling depends on a lot more belief being externalized into artificial belief systems. Education and healthcare are full of such externalized beliefs. 

75/ Classrooms and textbooks believe vast numbers of things about students on behalf of teachers. Hospitals and clinics believe a vast number of things about the right way to care for patients.

76/ Historically, powerful artificial belief systems have been restricted to highly specialized domains where a lot of shared, codified orientations and mental models could efficiently exist.

77/ A hospital embodies many orientations — emergency responses, ICU routines, surgical preparation — each with dozens of mental models involved. 

78/ But a doctor or nurse only has to do a fraction of the necessary work of context switching from situation to situation. There is a reason doctors/nurses are good candidates for bot-izing: they are already bot-ized to a large extent.

79/ More generalized, less codified work environments offer less leverage for encounter-based parallelism. A good blackberry+admin supported executive could handle perhaps 20 general parallel threads.

80/ Today, with an iPhone, a belief-bot like Siri, and an admin assistant, and the right set of information context tools, an executive can handle perhaps 40 general parallel threads.

81/ But we’re seeing a whole new era of support systems that are about to create an exploding capacity for artificial belief — and it will be available to all of us.

82/ Deep learning systems are effectively artificial belief systems. The fact that they inherit the biases of their programmers/users is not always a bug; it can be a feature.

83/ Think about it. Wouldn’t you like to have an artificial belief system that believes many of the same things as you, and in some cases instead of you, and makes decisions as you would?

84/ To take the most politically incorrect case, let’s say you’re a racist bigot, as is your right to be. What kind of artificial support would you need to be the best racist bigot you could be?

85/ A Siri++ that can not just learn your accent, but the “accent” of your judgment biases, can super-power you. This is NOT about execution automation (doing things as you would). This is about believing things your way.

86/ Other technologies are converging on this future, among them conversational interfaces and the blockchain (as a consensus and commitment technology).

87/ One reason I've undertaken the Q Lab experiment (with 15 parallel threads of project support, compared to my regular consulting bandwidth of 4 parallel gigs) is to really push this as far as it can go.

88/ I’ll have more to say on these matters in future newsletters, but the big takeaway is this: we need digital assistants to evolve into full-blown electric monks; there's too much believing work to do.

89/ When you increase your leverage by having artificial believing systems do more of your high-inertia believing work for you, you can be more like a lightly flitting bird.

90/ You flutter from process to process, a light touch here, a gentle nudge there. Mind like water. Ridiculously fast transients not because you’re a martial artist but because all your belief baggage is checked in.

91/ How powerful can such a support system get? In terms of number of simulated parallel threads of execution, with the right technologies, I don’t see why all humans couldn’t get to ~2000 or more.

92/ Doctors already do this level of parallelism. Yes, it is highly stressful, error-prone, and has all sorts of problems, but we have an existence proof that it is conceptually workable. 

93/ This is what I mean by “I want to turn myself into a bot.” Outsource as much of my belief work as possible to automated believing systems molded to my brain.

94/ Why? So I can become a hyper-fast-transient context-switching specialist. A pure operating system. I'd like to turn into a set of recursive COO-bots, all the way down.

95/ And I don’t want to do this the hard way by spending 10 years meditating towards no-mind, or learning 10 styles of martial arts so I can get to the style-of-no-style. Meh.

96/ A human being should be able to switch context, switch context, switch context, switch context, switch context, switch context, switch context. Belief work is for electric monks.

97/ Nor am I a transhumanist in the sense of particularly wanting to replace/augment myself with artificial parts or upload myself to the cloud (I don’t think the latter is philosophically possible).

98/ If there’s an ideal that defines this aspiration to transcendence-by-bothood, it’s that you make yourself informationally tiny. To paraphrase Paul Graham, keep your informational identity small.

99/ If you get it small enough to be close to no-mind/no-style, perhaps you can experience the richness of the universe in all its gazillion-threaded parallel glory.

100/ And do so without having to go to all the trouble and inertia of believing things about it. In a hundred years, that will seem like an utterly barbaric way to use a brain.

Feel free to forward this newsletter on email and share it. You can check out the archives here. First-timers can subscribe to the newsletter here. You can set up a phone call with me via my Clarity.fm profile page.

Check out the 20 Breaking Smart Season 1 essays for the deeper context behind this newsletter. If you're interested in bringing the Season 1 workshop to your organization, get in touch. You can follow me on Twitter @vgr.