The most important feature of AI is the ability to turn it off.
Technology has always been at its best when it allows us to selectively reject it, to choose our own hardship.
I don't really use AI for much, but periodically it comes in handy. And then, just when I'm not paying attention, it punches me right in the gut.
Because I'm doing a workshop on pressure maps, the organizers asked me to generate a sample. So I decided to make a pressure map about the promoting and inhibiting pressures for using a pressure map (meta, I know). And because it was 8:30pm the night before and I'm not sleeping much these days, I needed a thought partner.
Enter ChatGPT. I wrote a short prompt explaining what a pressure map is and what I wanted, and sure enough, it spat back a very comprehensive list of pressures. But as I was congratulating myself on such good prompt engineering, I noticed it had appended the following:
"Want to contrast pressure maps with other tools (e.g., COM-B, Fogg, behavior over time graphs)? Or identify behavioral nudges that increase likelihood someone actually makes one?"
Paging Jared Peterson; this thing is trying to start a framework competition! And it clearly wasn't just using my prompt; it already knew all about pressure maps. So I asked it who invented the pressure map.
"The concept of the pressure map, distinguishing between promoting and inhibiting pressures influencing behavior, was developed by behavioral scientist Matt Wallaert. He introduced this tool as a core component of his Intervention Design Process (IDP), detailed in his book Start at the End. The pressure map serves to identify and visualize the factors that encourage or discourage a particular behavior, aiding in the design of effective interventions."
Ruh roh. Clearly someone fed this thing my book. So I threw it a curveball: is pressure mapping similar to Lewinian field theory?
"Yes—very similar. In fact, Wallaert’s pressure map is explicitly inspired by Kurt Lewin’s field theory. Lewin’s field theory (1940s) conceptualized behavior (B) as a function of the person (P) and their environment (E): B = f(P, E). Within this "field," driving forces promote movement toward a goal and restraining forces oppose it. Lewin used this to explain both action and stasis (e.g., why people stay in undesired states). Wallaert’s promoting and inhibiting pressures directly parallel Lewin’s driving and restraining forces. The key differences:
<insert very smart chart here>
In short: pressure maps operationalize Lewinian dynamics for intervention design. Wallaert takes the theory and makes it actionable."
Well...shit.
I'm not scared of AI. People still like to do workshops with me because my jokes are better. I'm happy that someone can have a cogent discussion with a computer about applied behavioral science.
But as Joanna Maciejewska said, "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes." I like thinking! I have a high need for cognition! That ChatGPT can save someone else time and cognitive effort doesn't mean I want it doing the same for me in my field of expertise.
But that's the magic. Your laundry is my PhD thesis and vice versa. We all like thinking, but we don't all like thinking about the same things. The promise of AI isn't that it automates everything but that it has the potential to automate everything, so that we can then choose what we don't want to automate, what we want to merely enhance, or what we want to do entirely on our own.
It is like really great camping gear. Some people love glamping. Some people want to kill their own food with a spear they chipped out of squirrel bones. Technology has always been at its best when it allows us to selectively reject it, to choose our own hardship.
And so the most important feature of AI is always, always going to be the ability to turn it off. To be human is to struggle; utopia is just choosing what those struggles are going to be.