INFORMA(C)TION — December 11, 2022
How AI can augment your thinking, plus Amy Jiménez Márquez on IA leadership and other things worth your attention.
Hello! I'm Jorge Arango and this is INFORMA(C)TION: a weekly dose of ideas at the intersection of information, cognition, and design. If you like this email, please forward it to a friend. And if you're not subscribed, sign up here. Thanks for reading!
Photo of a toy robot by Rock'n Roll Monkey on Unsplash

Three Roles for Robots

Like many others in tech, I’m cautiously excited about AI. ChatGPT in particular feels like a milestone: the moment when these things went from intriguing curiosities to practical tools. If you haven’t played with it yet, give it a go.

Now, the critical question is: how will you use it?

Among other things, this question raises thorny issues about authorship and ethics. For example, many folks lament the end of essay-based school assignments. Some have noted that ChatGPT (and its ilk) are to text-based assignments as calculators are to math problems.

What does it mean for knowledge workers to have a tool that does for writing what calculators do for math problems? More pointedly, how do you use these things without losing your soul?

One way to go about it is to think of their role as being on a continuum ranging from “no impact whatsoever” to “total replacement.”

A line indicating a continuum between two extremes: ‘no impact’ on one end and ‘replacement’ on the other.

The “no impact” side is the world pre-AI: nothing changes. Obviously, this position is moot now. The other extreme (“complete replacement”) is not yet feasible — or, in my opinion, desirable. So, you’re looking at a point between these two.

There might be several worth exploring. I’ve identified three, which I’ll map to three roles previously played by humans:

  1. AI as editor
  2. AI as ghostwriter
  3. AI as amanuensis

Let’s examine them in more detail.

AI as editor

This means using AI to check your work. It might include looking for spelling and grammar errors, something many of us have done for a long time without qualms.

The previous continuum showing a mark near ‘no impact’ which reads ‘editor’

For example, I’ve used Grammarly to check these newsletters all along, but I hadn’t felt compelled to share that fact until now. (I hope you don’t feel cheated!)

A more advanced system could alert you to gaps in reasoning or logical fallacies. Such a tool (Thinkerly?) would offer suggestions and let you decide what to do about them. Over time, it would hone your reasoning.

Case in point: Grammarly has trained me to spot recurring flaws in my work, a more relevant and compelling context for learning grammatical rules than abstract scenarios. I’m a better writer as a result.

I expect Thinkerly would do the same for our ability to argue cogently. The ethics are a bit trickier than with a grammar/spelling checker, but not by much.

AI as ghostwriter

This position is why essay-based assignments are going away. Here, you might have an idea and hand it over to the AI (perhaps as a set of bullet points), which then composes the finished text for you.

The previous continuum showing a mark near ‘replacement’ which reads ‘ghostwriter’

Take a few minutes to ask ChatGPT to write a sonnet on a particular topic or compose a short essay on the merits of some position. It’s uncanny — and a bit terrifying, given the possibilities inherent in mass-producing convincing texts.

There are many ethical concerns with this position. I worry that unattributed AI-generated texts will soon flood our information environments. (One possible silver lining suggested by Ben Thompson: we’ll have to hone our BS detectors.)

I’m planning to avoid using the robots in this capacity.

AI as amanuensis

This position is perhaps the least clear from an ethical perspective — and the one that most excites me. I think of it as being somewhere in the middle.

The previous continuum showing a mark in the middle which reads ‘amanuensis’

During the early modern era, scholars hired personal secretaries into their households. These so-called amanuenses mostly took dictation but also summarized and cleaned up texts, translated sources, compiled indices, followed up on references, ran errands, and more, freeing scholars to focus on more important things.

Over the last few weeks, I’ve used ChatGPT and GPT-3 as amanuenses. I think of them as research assistants and sounding boards: I feed them questions and sometimes snippets of my writing and ask for summaries or suggestions for further reading. Their responses help clarify my thinking.

For example, I asked ChatGPT to summarize the first part of this essay. Here’s its response:

The author of this essay is cautiously excited about AI and ChatGPT in particular. They discuss the ethical implications of using AI to assist with text-based assignments, and propose three ways in which AI can be used in the writing process: as an editor, ghostwriter, or amanuensis. The author believes that while complete replacement of human writing is not yet feasible or desirable, AI can be a valuable tool in the writing process.

Interactions like this help me validate the clarity of my writing. And asking for similar summaries of others’ writings helps me determine whether they’re worth my time. (I’m testing the Readwise Reader beta, which uses GPT-3 to summarize articles in this manner.)
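
If you’re curious what this amanuensis workflow looks like as a script rather than a chat window, here’s a minimal sketch using the GPT-3 completions endpoint, roughly as it works at the time of writing. The model name, prompt wording, and file name are illustrative assumptions, not a recipe I’m prescribing:

```python
import os
import openai

# Assumes the `openai` Python package and an OPENAI_API_KEY environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize(text: str) -> str:
    """Ask GPT-3 to summarize a draft or article excerpt."""
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3 completion model available in late 2022
        prompt=f"Summarize the following text in three sentences:\n\n{text}",
        max_tokens=150,
        temperature=0.3,  # keep the summary close to the source text
    )
    return response.choices[0].text.strip()

# Example: summarize a draft before deciding whether it reads clearly.
with open("draft.md") as f:
    print(summarize(f.read()))
```

The specifics matter less than the loop itself: feed the assistant a draft or a source, read the summary, and use the gap between what you meant and what came back to sharpen your thinking.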

While I’m not planning to pass off AI-generated texts as my own (i.e., the ghostwriter position), the thinking behind my writing is now augmented by interactions with AI assistants. They suggest different avenues for exploration, research materials, and even words and ideas I may have missed. (E.g., the phrase “sounding boards” above was suggested by ChatGPT.)

I’ve also started including AI-generated clippings in my notes, much as I include quotes and citations from human authors. (To keep things clear, I prepend Robot: to any texts produced by AIs.)

This is to say, even though my meaty fingers typed these words, the thinking behind them is a hybrid concoction. Which I think is OK — and perhaps even necessary, given the information overload we’re facing. But who knows?

One way or another, we’re blurring authorship in weird and interesting ways. At any given moment, we might employ these tools in any of these roles, depending on our particular needs. Personally, I plan to avoid ghostwriting. But it’s a choice, and one we’ll all be called to make soon.

From my work

The Informed Life with Amy Jiménez Márquez
Notes on a conversation with a senior design leader on the role of information architecture in product design.

The Hair Metal Disclaimer
New tools are changing how we create. How will you respond?

Also worth your attention

Writing as magic
Marc Brooker on writing as a means for clarifying your thinking. Worth considering what we might be losing as a) we rely more on AI to process texts on our behalf and b) we communicate more via non-textual media (e.g., video).

How GPT screws up
“How can GPT seem so brilliant and so stupid at the same time?” Succinct insights into how this system does what it does — and how it fails. (H/t Tyler Cowen)

ChatGPT and the abacus
How does ChatGPT change computer education? Charlie Meyer: “we need to rethink what it means to learn how to program.” Meyer calls out a highly relevant distinction between complementary and competitive types of cognitive artifacts. Which are we dealing with now? (H/t Eric Knudtson)

Life-changing ideas
Seven ideas worth internalizing — mostly about de-centering yourself to acknowledge the limits of your understanding. Relative to all the knowledge in the world, we’re sipping through filaments from an unimaginably broad firehose.

Analog writing process (YouTube)
A brief (yet rich) peek into the process Victor Margolin used to write his two-volume World History of Design. Inspiring, especially if you're into analog note-taking. (H/t Chris Aldrich)

Metamodernism
Anne-Laure Le Cunff explains metamodernism, a philosophical stance well-suited to understanding the time we live in. I first learned about this concept from Andrea Resmini, who’s made similar points. Le Cunff’s essay is a good refresher.

Bottom-up thinking (podcast)
A conversation about bottom-up thinking between Gordon Brander, Sonke Ahrens, and Jerry Michalski. As Tyler Cowen would say, self-recommending.

Tools

Obsidian Canvas view
My preferred note-taking system is gaining spatial hypertext capabilities. I’ve been testing this feature since it was announced, and it’ll likely change how I use Obsidian.

Hello search
Search engines are among the products that will improve fastest and most obviously via language models. I’ve previously highlighted search engines with ML-driven front-ends; here’s one that offers how-to answers for software developers.

Spidergram
A new website structural analysis tool from the folks at Autogram. I haven’t tried this yet, but it looks useful for analyzing large websites.

Silver Bullet
Like Obsidian, but in your browser and open source. Given the low barrier to entry, it’s potentially useful for teaching folks to make hypertext notes.

Futurepedia
A directory of AI tools, updated daily. (H/t Karl Fast)

Parting thought

Nothing is so painful to the human mind as a great and sudden change.

― Mary Wollstonecraft Shelley, Frankenstein

Thanks for reading! 🙏
P.S.: If you like this newsletter, please share it on Twitter or forward it to a friend. (If you're not subscribed yet, you can sign up here.)
Disclosure: This newsletter may contain Amazon affiliate links. I get a small commission for purchases made through these links.





