Blog

A Collection of Short-form Articles,
Project Write-ups, and Unfinished Essays

Against Special Casing

AI and a Future for Open-Ended Software

November 14, 2024

AI · Biology · Science

Part 1: The Mind and Body

When we think of the mind, we think of the human mind. When we think of the body, we think of the human body. We struggle to project ourselves into the experience of another mammal such as a bat¹, let alone experiential modes even more distinct from our own.

Our understanding of intelligence, of what constitutes an intelligent being, and even of what defines an individual is deeply rooted in our knowledge of ourselves and our close animal relatives.

The human brain is just one of many network architectures that occur in nature. The lifespan, procreative nature, and tissue-regenerative capacities of the human are also representative of a narrow corridor of life on this planet.

Our human-centric mental model for the notions of mind and body forms pervasive analogies that in turn guide the design of our technological systems - AI models (minds) and their corresponding interfaces (bodies).

These analogies will serve only to limit software’s evolution.

While the comparison of AI to the human mind has its own drawbacks, I am here concerned with the body analogy.

Rather than vertebrate minds and bodies, we should take inspiration from other forms of life, such as octopuses, or better yet, fungi, which offer alternative models of intelligence and adaptability.

In Entangled Life, Merlin Sheldrake notes that “animals put food in their bodies, whereas fungi put their bodies in the food.”²

We have long been dealing with animals. Word, Photoshop and Maya are all vertebrates. They have rigidly defined bodies: workspaces, panels and menus. They move documents and data through well-defined pathways and organs. The individual is not adaptive, or is minimally so. Change instead occurs over generations of the software.

Species that wrap themselves around their food, that become enmeshed in it; species built of largely undifferentiated material that can be spun into special purpose material (such as a mushroom) when the time is right - these are biological analogies that will lead software into a new age.

Part 2:

Computers in the 1980s and 90s, when user interfaces first came to life, were quite impoverished by today’s standards. Software developers often resorted to hardware-specific optimizations that could make all the difference for shipping but did not generalize or age very well.

Graphics functionality, including user interface rendering, was often the focus of these heroic programming efforts due to the inherent conflict between the computational complexity of graphics calculations and the need for real-time interactivity.

Under these conditions, it was a miracle to ship anything at all, and certainly no one had the luxury of concerning themselves with the ways in which statically defined software and interfaces would limit the user.

...

¹ Nagel, Thomas. "What Is It Like to Be a Bat?" The Philosophical Review, vol. 83, no. 4, Duke University Press, Oct. 1974, pp. 435-450.
² Sheldrake, Merlin. Entangled Life: How Fungi Make Our Worlds, Change Our Minds & Shape Our Futures. Random House, 2021, p. 51.

The Revolution in Methodology

Transforming Science, Engineering + Design through AI

September 14, 2023

AI · Methodology · Science · Engineering · Design

Product development at a modern technology company is driven, almost pathologically driven, by use-cases.

What is this thing for? That really does seem like a good question, doesn't it? It's not, and I'll explain why.

At some point in the 2010s, or maybe before, designers started inserting the word 'empathy' into everything they said about their practice. You couldn't get away from it. It was all a pretense, though. The notion that each of your decisions was made from a position of “user empathy” is really quite self-involved when you think about it. It is an assertion that you, as a trained designer, possess the rarefied talent of being able to understand and share the feelings of another. It is paternalistic.

So are use-cases. They suppose that you know what this thing is for. But nothing is actually for anything. Things exist and people find utility in them. Sometimes people have to go out of their way to address specific utilizations by synthesizing or combining things. But the things themselves do not belong to those utilizations. They contain multitudes. We learned this about art last century. Art doesn't just mean what the artist means by it; it means what we mean by it too.

So I think use-cases are highly overrated.

I prefer concise strokes that beget infinite possibilities.

The shockingly minimal criteria required to achieve Turing completeness - there's one that's done wonders for us. Though, as Alan Perlis famously put it, "Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy."
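Consider just how little it takes. The three combinators S, K and I are, on their own, Turing complete. As a sketch of my own (the tuple encoding is an arbitrary choice, not anything canonical), a normal-order reducer for them fits in a couple of dozen lines of Python:

```python
def step(t):
    """Apply one leftmost (normal-order) reduction, if any.
    Terms: 'S', 'K', 'I' are combinators; a tuple (f, x) is an
    application; any other string acts as a free variable."""
    if not isinstance(t, tuple):
        return t, False
    f, x = t
    if f == 'I':                                     # I x -> x
        return x, True
    if isinstance(f, tuple) and f[0] == 'K':         # K a x -> a
        return f[1], True
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
        a, b = f[0][1], f[1]                         # S a b x -> a x (b x)
        return ((a, x), (b, x)), True
    nf, changed = step(f)                            # otherwise reduce inside,
    if changed:
        return (nf, x), True
    nx, changed = step(x)                            # leftmost-outermost first
    return (f, nx), changed

def normalize(t, limit=10_000):
    """Reduce to normal form -- if one exists. Some terms loop forever,
    which is exactly the tar-pit Perlis warned about."""
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            return t
    raise RuntimeError("no normal form found within limit")

# S K K behaves like I: (S K K) v -> K v (K v) -> v
assert normalize(((('S', 'K'), 'K'), 'v')) == 'v'
```

Everything computable is expressible in those three symbols. Almost nothing is pleasant to express in them, which is the tar-pit's whole point.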

It turns out that within that concise stroke (the programmable computer) another even more stunning one becomes possible.

DeepMind’s mission statement puts it best:
Solve Intelligence. Use it to solve everything else.

This is the best concision-to-possibility ratio we'll ever see.

Designing for a Positive Sum

April 26, 2023

Design · Engineering · AI

Design work today takes a fundamentally zero-sum view of its own raison d'être. The notion of a user pain point is rooted in the assumption that design’s purpose is to neutralize any adversity experienced by the user. This goal was misguided from the outset - adversity is, after all, a potent catalyst for innovation. But with the rapid acceleration of artificial intelligence capable of automating an ever-growing list of complex tasks and removing the last shreds of adversity from our workflows, we must take design in a new direction.

Technology should be a force for positive-sum transformation. It enables us to extend our reach - to build higher and probe deeper. As designers, we have an incredible new opportunity and an important responsibility: to shape this technology, to design our creative counterparts and to set our collective sights upon the creation of an ever-brighter future.

Machine Learning, Ahistorical Design & the Limits of Conventional Wisdom

January 6, 2018

Design · Engineering · AI · Machine Learning
“Rocket Boy” by Damon Lehrer

Paper is the ultimate open-ended material. We didn’t become any freer when we went to computer-based design.

Shanghai Tower

On the contrary, computers help us to think bigger because they enable us to scaffold our constraints.

CAD tools allow us to model physical and material properties that help us to test whether an idea will work in the real world.

Through this, we can probe and scaffold our way towards the building of things that are too complex to model in our heads.

The Parthenon in Athens

But, herein lies the problem…

Since we cannot easily reason or speculate about what might work in such complex domains, we can only probe new possibilities through a kind of guess-and-check interaction with the modeling software.

Naturally, these probes must originate from something we understand and can imagine: the conventional wisdom handed down through the ages.

If we could see outside this lineage and explore the full possibility space made accessible to us by modeling tools, we might find incredibly elegant or optimal new solutions to countless problems.

Unfortunately, it is virtually impossible for us to see outside of the historical lineage in anything other than an iterative (and therefore historical) manner.

Then suddenly, machine learning comes along…

Now we not only have machines that can simulate the world, we also have machines that can speculate about it and search the possibility space in a completely unemotional way — with no regard for how others have solved problems in the past and guided only by a quest for the optimal solution.
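The mechanics need not be exotic. Even the crudest version of this stance, pure random search over a possibility space with a cost function as its only guide, carries no memory of how anyone solved the problem before. Here is a toy sketch; the "depth-to-span ratio" problem, its cost function, and its optimum are all invented for illustration:

```python
import random

def ahistorical_search(cost, propose, steps=10_000, seed=0):
    """Blind search over a possibility space: no inherited priors,
    no precedent -- just propose candidates and keep the best one."""
    rng = random.Random(seed)
    best = propose(rng)
    best_cost = cost(best)
    for _ in range(steps):
        candidate = propose(rng)
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost

# An invented toy problem: choose a depth-to-span ratio for a beam,
# where convention might say "about 1/10" but the (hypothetical)
# true optimum sits somewhere nobody thought to look.
cost = lambda r: (r - 0.137) ** 2          # hypothetical optimum at 0.137
propose = lambda rng: rng.uniform(0.05, 0.5)

best, best_cost = ahistorical_search(cost, propose)
```

A real system like AlphaGo Zero replaces blind sampling with self-play and learned value estimates, but the stance is the same: the search begins from nothing and answers only to the objective.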

DeepMind's AlphaGo Zero

To understand the impact of this, let’s look at the recent accomplishments of a board-game-playing artificial intelligence.

People have been refining strategies for the game of Go for nearly 3,000 years. Then DeepMind built a Go-playing system and challenged the world’s best human player.

Traditional strategy emphasizes moves along the third and fourth lines. In the match, AlphaGo played a surprising move on the fifth line. The commentators suspected a blunder. Lee Sedol paused to regroup. Only later did the depth of that move become clear: it was historically unusual yet positionally sound.

In the aftermath, professionals revisited their understanding of the game, giving more consideration to ideas around the fifth line. DeepMind then introduced a new version, trained without exposure to any human-played games. This system beat the prior version 100–0. The lesson is clear: conventional wisdom isn’t always optimal.

Unité d’Habitation at Briey

This kind of ahistorical capacity is a very powerful thing. But, in many realms it cannot stand on its own.

As Minimalist and Brutalist architecture suggest, a designer’s effort to optimize a system is not always well-aligned with the true needs of the user, who must often circumvent the system’s design in order to perform their duties within it.

In domains like architecture and design, machine learning can provide a powerful mechanism for seeing beyond conventional wisdom.

But we still need human architects and designers to mediate, curate and synthesize the machine’s ahistorical insights into creations that will be useful to and understood by humans.

Mood Board on Pinterest

In this sense, we should not think of machine learning as our replacement. We should see it as a kind of mood board, a new source of inspiration.

This article is based on a recent talk I gave as part of The American Institute of Architects’ Technology in Practice “Building Connections Congress 2018” event, which was chaired by Natasha Luthra.