INFORMA(C)TION — February 5, 2023
AI for better information architectures, plus Léonie Watson on accessibility and other things worth your attention.
Hello! I'm Jorge Arango and this is INFORMA(C)TION: a weekly dose of ideas at the intersection of information, cognition, and design. If you like this email, please forward it to a friend. And if you're not subscribed, sign up here. Thanks for reading!
Colorful shapes against a gray field. One of them looks a bit like a robot with a hexagonal head.
Image by Stable Diffusion

Artificial intelligence for information architecture

There’s currently a lot of hype about artificial intelligence. In the last few weeks, I’ve seen takes ranging from wildly overblown speculation to outright dismissal. Both extremes are wrong. (But not surprising, given the mix of inscrutable new technologies and uncertain economic and labor markets.)

Part of the problem lies with the term ‘artificial intelligence’ itself, which creates unrealistic expectations. Although ChatGPT and its ilk sometimes appear to converse like humans, they’re not artificial general intelligences (AGI) like HAL 9000.

Which is to say, these systems don’t reason like humans. But that doesn’t mean they aren’t helpful. (Or harmful — more on that below.) To avoid this semantic trap, I’ll refer to these things as large language models (LLMs) rather than AIs.

Like all technologies, LLMs have pros and cons. The question is: how can we exploit the pros while avoiding the cons? The answers vary depending on what field you’re examining.

Let’s focus on information architecture. By IA, I mean structuring systems so people can find, understand, and accomplish things. You experience IA in the navigation choices you see in websites and apps, category filters in product catalogs, a book’s table of contents, and other such organizational and wayfinding aids.

I’m spelling this out to emphasize the distinction between IA and the content you see in these systems. It’s easy to see how LLMs might disrupt the business of content production. Given the right prompts, tools like ChatGPT can produce passable first drafts much faster than human authors.

But can they also organize the system’s structure? My sense is LLMs can’t yet replace humans at this task, but they can help us do it faster, more efficiently, and better.

What can (and can’t) LLMs do?

To understand why, it’s worth digging into what these things are and how they work. LLMs are neural networks trained on massive amounts of text. The goal is to predict the next word in a sequence within a particular context. If you string enough statistically relevant words together, you get what often seem like reasonable statements.
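To make “predict the next word” concrete, here’s a toy bigram model. Real LLMs use transformer networks trained on billions of tokens, but the training objective is the same shape: given what came before, guess what comes next. (The corpus here is obviously a made-up miniature.)

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny "training corpus."
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

String enough of these statistically likely continuations together and you get fluent-sounding text — which is exactly why fluency alone is a poor signal of understanding.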

When understood at this level, it’s easy to see why LLMs are better at some things than others. Among the things they do well are summarizing and translating texts, determining sentiment (whether a text is ‘positive’ or ‘negative’), and answering questions about concepts that appear in the training data.
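As a sketch of the sentiment task — not how an LLM does it (a model handles negation, sarcasm, and synonyms that a word list can’t), just the input/output shape in miniature, with made-up cue-word lists:

```python
# Toy sentiment scorer: count positive vs. negative cue words.
# The cue lists are illustrative placeholders, not a real lexicon.
POSITIVE = {"great", "love", "excellent", "helpful"}
NEGATIVE = {"terrible", "hate", "broken", "useless"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("the navigation is great and the search is helpful"))  # positive
```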

What they don’t do well is reason. This is why you read about ChatGPT ‘hallucinations’ that sound convincing but convey nonsense or lies. The effect is a sort of cognitive uncanny valley: the system seems eloquent and self-confident but can’t connect dots humans take for granted. (At least not yet.)

So, the key to using these things effectively is to assume they lack human intelligence. Instead, what they bring to the table are text analysis and processing superpowers.

Whether that constitutes ‘intelligence’ is for others to debate. I’m more interested in how super-powerful text processors might help create better IAs for people. And there’s lots of potential here. Let’s consider possibilities by focusing on three stages of the IA design process: research, modeling, and production.

Research

At the start of a project, you want to understand the content, context, and users of the system you’re designing. LLMs can help.

Consider how long it takes to audit a large website’s content. An LLM-powered program could visit every page, write a brief summary, note patterns, find outdated content, etc., much faster than a human. Another promising area for research is performing sentiment analysis on interview transcripts, which would make journey maps more credible and useful.

In both cases, an LLM could improve the quality, quantity, and speed of data that feeds into the (human-led) design process. I’m already experimenting with LLM-driven summaries of page contents and plan to do more in this area in the near term.
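Here’s a sketch of what such an audit pipeline could look like. The `summarize` function is a stand-in for an LLM call (here it just truncates); the names and sample pages are hypothetical. The point is the shape of the workflow: visit each page, summarize it, tabulate the results.

```python
import csv

def summarize(text, max_words=12):
    """Placeholder for an LLM call; real summarization would go here."""
    words = text.split()
    return " ".join(words[:max_words]) + ("…" if len(words) > max_words else "")

def audit(pages):
    """pages: dict mapping URL -> extracted page text. Returns audit rows."""
    return [
        {"url": url, "summary": summarize(text), "word_count": len(text.split())}
        for url, text in pages.items()
    ]

# Illustrative input; in practice a crawler would supply the page texts.
rows = audit({
    "/about": "We opened our doors in 1999 to make better widgets for everyone, everywhere, at any scale.",
    "/contact": "Email us anytime.",
})
with open("audit.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "summary", "word_count"])
    writer.writeheader()
    writer.writerows(rows)
```

Swap the placeholder for a real model call and the same loop scales to thousands of pages.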

Modeling

I suspect LLMs can also help in the modeling process. In the context of IA, this means synthesizing research data and design directions to define the system’s core distinctions (concepts, categories, taxonomical terms, etc.) and their relationships.

We know LLMs can categorize texts into particular pre-selected topics. The question is whether these systems can also suggest topical categories on their own, perhaps augmented with categorical prompts from card sort data and such. This is a direction I’m beginning to explore as well.
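A minimal illustration of the pre-selected-topics case, using vocabulary overlap in place of a model — an LLM handles the synonyms and context this crude version misses, and the categories and seed terms here are made up:

```python
import re

# Assign a text to whichever category's seed vocabulary it overlaps most.
CATEGORIES = {
    "billing": {"invoice", "payment", "refund", "charge"},
    "shipping": {"delivery", "tracking", "package", "courier"},
}

def classify(text):
    words = set(re.findall(r"[a-z]+", text.lower()))
    scores = {cat: len(words & seeds) for cat, seeds in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(classify("Where is my package? The tracking number stopped updating."))  # shipping
```

Suggesting the categories themselves — rather than matching against a given list — is the harder, more interesting problem.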

Production

By ‘production,’ I mean translating abstract models into things people can use, such as screen-level prototypes. I haven’t yet seen a proof-of-concept in this direction, but I expect a model could be trained on a corpus of web pages and app screens to output UI-level components and layouts.

Paired with a well-structured design system, such a model could (in theory) quickly produce lots of variations for evaluation. Again, I don’t expect such a system would replace human designers in the near term, but it could generate drafts for them to start iterating on.

What is currently feasible is converting data structures to and from different formats, which could automate several production areas. See this video from Rob Haisfield for a glimpse at the possibilities. It’s exciting stuff!
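As a trivial example of the format-conversion idea — no LLM needed for cases this regular — here’s a nested sitemap (a hypothetical one) turned into an indented outline:

```python
# Convert a nested sitemap (dict of label -> children) into an indented
# text outline. Structure-to-structure conversions like this are easy to
# automate today; LLMs extend the idea to messier, less regular inputs.
sitemap = {
    "Home": {
        "Products": {"Widgets": {}, "Gadgets": {}},
        "About": {},
    }
}

def to_outline(tree, depth=0):
    lines = []
    for label, children in tree.items():
        lines.append("  " * depth + "- " + label)
        lines.extend(to_outline(children, depth + 1))
    return lines

print("\n".join(to_outline(sitemap)))
```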

Looking to the future

While most of what I’ve mentioned so far seems feasible now, it’s harder to predict what other capabilities might become available further out. For example, I’ve seen speculation that future LLMs might produce bespoke experiences tailored to each individual in real time. Such systems could be irresistibly persuasive and, of course, ripe for abuse.

While theoretically possible, I suspect generating one-off experiences on the fly isn’t feasible in the near term. The computational costs are too high in terms of time and resources. Instead, we’ll more likely see augmentations (as opposed to disruptions) of the traditional design process in the near- and mid-term along the lines sketched above.

A call for cautious optimism

Like all new technologies, LLMs can be used for nefarious ends. But they also hold great potential. It’s wrong to assume they’ll replace you soon, but it’s also wrong to dismiss them. It behooves you to explore the possibilities, since it’s still early enough to steer them toward humane ends. Designing better IAs is one of them.

I’m more excited about LLMs than I’ve been about any new technology since the web. That’s saying a lot; I left my career in architecture after seeing the web! But LLMs don’t make me want to leave design. Instead, I’m adding them to my toolbox to design better digital systems faster. I’ll share with you what I learn.

From my work

The Informed Life with Léonie Watson
An overview of my conversation with an expert on the role of accessibility in producing better experiences for everyone.

Building a Personal Knowledge Garden
Video of a presentation about personal knowledge management that I delivered at UX Lisbon in May 2022.

Also worth your attention

Commoditizing design?
Cornelius Rachieru on how commoditizing design (e.g., by using design systems) is hurting the discipline amidst a tightening job market.

Intro to transformers (YouTube)
A short, understandable video intro to transformers, the technology behind LLMs like ChatGPT.

Taxonomies vs. ontologies
Heather Hedden on the essential differences between two key — and often conflated — terms in information management. Must-read if you’re getting started with information architecture.

ChatGPT + Bing?
A leak hints at Microsoft’s GPT-3-powered Bing UI. It’s clear LLMs will change how we use search engines. (And their business models, natch.)

Early tools for thought (YouTube)
Video of Mark Bernstein’s recent talk about the early history of hypertext. Highly recommended, especially if you’re into tools for thinking and information architecture.

Medieval attention
How monks in the Middle Ages managed their attention (lest they lose their souls). (H/t Karl Fast)

Tools and techniques

Obsidian + Alfred
If you use Alfred on macOS, this trick will help you quickly switch between multiple Obsidian vaults. (It’s generalizable to other interactions.)

Map your book
In a previous issue of the newsletter, I wrote about how I use notes to map out my books. This post shows you how to create a structure to guide readers through your nonfiction book. (H/t Harry Max)

Thinking tools
100 thinking tools from the folks at Ness Labs. (H/t Deepculture)

Magician for Figma
LLMs can help UI designers work faster. This Figma plugin is meant to speed up screen-level work. (H/t Benedict Evans)

IA for PKM
How to use thesauruses, taxonomies, and ontologies to organize your personal knowledge.

Upcoming events, workshops, etc.

IA Essentials workshop
Mar 29 — I’m teaching a live one-day version of my Information Architecture Essentials workshop at the IA Conference in New Orleans.

Parting thought

"The link is the most significant new punctuation since the medieval invention of the comma."

— Mark Bernstein

Thanks for reading! 🙏
P.S.: If you like this newsletter, please forward it to a friend. (If you're not subscribed yet, you can sign up here.)
Disclosure: This newsletter may contain Amazon affiliate links. I get a small commission for purchases made through these links.






Boot Studio LLC · P.O. Box 29002 · Oakland, CA 94604 · USA