51. Uncertainty Regulation as a Basic Drive

September 14, 2020
In which I hypothesize that 'regulate uncertainty' is a basic drive alongside pleasure-seeking and pain-avoidance, explore my own preference for improv over planning as uncertainty management, and argue that the three grand strategies are spend more, do less, or think harder — only the last keeping life interesting.
I've heard of 3 hypothesized "basic drives" for living things: seek pleasure, avoid pain, minimize energy. I don't know that any of them is based on anything more solid than Freud-level hand-waving, but I think there's a fourth one: regulate uncertainty.
As in, arrange behaviors, intentions, expectations, actions to keep uncertainty in a band around a set-point that's comfortable to you.
I've often thought of myself as being driven primarily by the "minimize energy usage" drive (alternatively, "path of least resistance"), but I think "regulate uncertainty" is a better description.
There's 3 things you can do to regulate uncertainty: increase risk taking, decrease risk taking, and change your operating beliefs. The last one is the most interesting one since it is the foundation of all subjective construction. We design our delusions to regulate uncertainty.
I tend to shut down at fairly low levels of uncertainty compared to many people, especially in certain areas. Bureaucratic uncertainty is especially toxic for me. If I'm waiting on more than 3-5 things like some government document or a financial or opsec thing, I freeze.
I regulate this primarily by simply minimizing my surface area for such things. So, for example, I never got into SBIR/STTR-type funding sources for small businesses, and tend to avoid using any service that involves interacting with highly involved UXes.
There's a hierarchy here. Minimize energy use is the foundational drive, but it is rarely directly triggered, since we're rarely at the limits there. So in practice, regulate uncertainty is the dominant one. It is modulated by our sense of the maximum energy we could put out.
So for example, I react to the potential for traffic delays by getting to the airport extra early, but that in turn is driven by my sense that I hate sprinting for the gate, and extra-hate the energy demands of rearranging plans if I miss a flight. I do not like having to spike.
"Minimize energy" is a very illegible optimization problem, since our energy efficiency is a complex function of physiology, cognitive style, and output efficiency. I'm more wasteful in sprint/spike mode. Otoh, I both hate planning and am inefficient/chafing in working to one.
So I end up solving for minimal energy use/least resistance by a) adopting a highly improv style b) building in lots of time for everything, so I can figure things out by trial and error in a relaxed unhurried way. I like to iterate, but not fast.
A preference for improv over planning is about narrowing the uncertainty band (improv uses more up-to-date info), but also about limiting rework energy (since waterfall plans need higher-energy reworking when they fail, whereas improv is typically just a 1-step backtrack).
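The rework-energy asymmetry can be made concrete with a toy model (the numbers, simplifications, and function name here are mine, invented for illustration): assume each of n plan steps can fail independently, and a waterfall failure forces redoing everything downstream while an improv failure costs one backtracked step.

```python
def rework(n=10, p_fail=0.2):
    """Toy model: expected rework steps for a plan of n steps,
    each failing independently with probability p_fail.
    Waterfall: a failure at step k forces redoing the remaining n-k steps.
    Improv: a failure costs a single backtracked step."""
    waterfall = sum(p_fail * (n - k) for k in range(1, n + 1))
    improv = sum(p_fail * 1 for _ in range(n))
    return waterfall, improv
```

With n=10 and a 20% per-step failure rate, expected waterfall rework is 9 steps versus 2 for improv, and the waterfall term grows quadratically with plan length.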
This tweet[1] seems like a good mathematical version of it.
I like Boyd's version, which is more positive/generative than my own:
@antlerboy It's also the core to boydian thought if you squint a bit... in destruction and creation he proposes something in that spirit https://t.co/3TuGKBgLwG
I’m always curious about how people turn complex utility functions into simple behavioral heuristics. For example, catching a ball in a sport is solving arg max over running trajectories of P(catch the ball | running trajectory).

The way people solve it is “run to keep ball at constant angle”
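A rough simulation of that collapse from optimization to feedback rule, under toy assumptions of my own (2D projectile, no fielder speed limit; `simulate_catch` and all parameters are invented for illustration). The folk rule is "constant angle"; this sketch uses its better-studied cousin, optical acceleration cancellation: keep the tangent of the ball's elevation angle rising at a constant rate, which also delivers the fielder to the landing point without ever computing a trajectory.

```python
def simulate_catch(v_x=15.0, v_z=20.0, g=9.81, fielder_x0=70.0, dt=0.01):
    """Toy sketch: a fielder who moves so that tan(elevation angle to the
    ball) grows at a constant rate arrives at the landing spot.
    Returns the miss distance. Reaction time and speed limits ignored."""
    t, rate, fielder_x = 0.0, None, fielder_x0
    while True:
        t += dt
        ball_x = v_x * t                       # projectile, no drag
        ball_z = v_z * t - 0.5 * g * t * t
        if ball_z <= 0.0:                      # ball has landed
            return abs(fielder_x - ball_x)     # distance from landing point
        if rate is None:
            # calibrate the constant rate from the first sighting
            rate = ball_z / (fielder_x - ball_x) / t
        else:
            # stand where tan(elevation angle) == rate * t
            fielder_x = ball_x + ball_z / (rate * t)
```

The fielder never solves the argmax; a one-line feedback rule substitutes for the whole optimization.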
The solution to “regulate uncertainty within this band” seems to be “don’t make a plan with more than n high-uncertainty actions in sequence”

Padding with delays breaks up fragile action sequences.
So if n=2

“Drive to airport” —> “Get through security” —> “Board plane”

turns into

“Drive to airport” —> “Get through security” —> “Wait around 10-60 minutes” —> “Board plane”
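A quick Monte Carlo sketch of why the inserted wait works (the delay distributions, deadline, and function name are all mine, invented for illustration): the buffer absorbs the summed variance of the upstream steps, driving the probability of the terminal failure toward zero.

```python
import random

def p_miss_flight(buffer_minutes, trials=20000, seed=0):
    """Toy Monte Carlo: chance of missing boarding when three sequential
    steps each have a random delay, with buffer_minutes of 'wait around'
    slack before a 90-minute boarding deadline. Numbers are invented."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        drive    = rng.uniform(30, 75)   # traffic uncertainty
        security = rng.uniform(5, 40)    # line-length uncertainty
        walk     = rng.uniform(5, 15)    # gate-distance uncertainty
        if drive + security + walk > 90 + buffer_minutes:
            misses += 1
    return misses / trials
```

With no buffer the miss probability is substantial; an hour of slack pushes it to zero in this toy model, which is the whole point of the "wait around 10-60 minutes" step.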
There’s actually a whole set of uncertainty-reducing behaviors. One of mine is “don’t drive unless you must,” since I don’t enjoy it much and tend to make time-stress mistakes like missing exits. Taking transit or an Uber turns a high-uncertainty action into a low-uncertainty one.
Thinking more about this, the reason uncertainty regulation is so important is that its flip side, failure, can have immediate consequences. If you have an accident driving to the airport you have to deal with it. You can’t defer the consequences. Failures are potential forks.
Some failures are containable. If you schedule an hour to debug a program and fail, you can say “I’ll take another shot at this next week” and go on to other things. But it’s hard even when it’s possible. Failure drains energy and you want to win it back immediately.
If you continue debugging for another hour, you don’t lose the sunk cost of getting situation awareness for that coding session. If you kick it to next week you can’t just pick up where you left off. You have to reboot. Pay the situation awareness cost again.
So regulating uncertainty is about containing failure forks. Adding slack helps with this.

Parallel uncertainty is worse but rare in personal decision-making.
Hmm. Alt formulation: “keep the number of forks/futures you have to consider in time period T non-trivially below n”

The branching factor of a time interval
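One toy way to operationalize that formulation (the function name and numbers are mine, invented for illustration): treat each action in the interval as having a branching factor, multiply them to count futures, and reject plans whose product exceeds your comfortable n.

```python
from math import prod

def futures(branching_factors):
    """Number of distinct futures implied by a sequence of actions in
    interval T, each contributing its own branching factor (1 = certain)."""
    return prod(branching_factors)

# Three uncertain steps back-to-back: 2 * 2 * 2 = 8 futures to track.
# Slack after the second step makes the third near-certain: 2 * 2 * 1 = 4.
```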
One way to manage uncertainty is to expend a lot of energy to control all sources of uncertainty in the environment. This is the rapid monopoly growth strategy of unicorn companies. Commoditize your complements, acquire all competition, buy up supply and distribution.
For individuals, this translates to getting rich / FU money. Money is energy in modernity.

Two more accessible (and in many ways more interesting) strategies that can work with limited budgets are robustness (simplify/minimalize life) and adaptability (reorient your OODA loop faster).
Robustness and antifragility (the distinction is irrelevant here) both attack the uncertainty potential of modernity by striving for locally Lindy simplicity.

Waldenponding and other defensive postures are uncritical special cases of this.

All sacrifice performance for robustness.
Adaptability is the most interesting. You just try to think harder and faster. Get inside the OODA loop of the environment to pwn it rather than either dominate it or retreat from it.

Those are the 3 broad strategies for uncertainty regulation: spend more, do less, think harder.
Each strategy has merits, but only the last one can make life steadily more interesting and continue the infinite game. The first two define winnable games and try to win them. If you fail, you’re destroyed. If you succeed, you’re in an arrested-development cul-de-sac.
In the last one, you view life as an asymmetric guerrilla-warfare challenge.

Kissinger’s “The conventional army loses if it does not win. The guerrilla wins if he does not lose” principle.

Big upside of this is that you stay interested and interesting in the world. Not win+exit.
Any uncertainty regulation strategy that looks like win+exit is a kind of acting dead.

You can’t actually win against the universe in the end. We all die. Do you want to explore as much of it as you can in the time you have? Or hit a win condition and sit around feeling empty?
This might be the lesson of the Cobra Kai show.