What Systems Optimize For — and What They Don’t
#reflection #systems #leadership #society
Parts of what follows grew out of several older posts I wrote on topics that, at first glance, did not seem to belong together. Looking back, I realized they were circling the same question from different angles. Instead of rewriting those posts, I decided to keep them as they are and synthesize the bigger pattern here, because over time I kept noticing it in very different areas of life.
In education, in technology, at work, and even in private life, systems tend to drift away from what builds long-term capability, resilience, and alignment. Instead, they optimize for what is easier to reward, easier to explain, or simply more attractive in the short term.
This rarely happens because someone planned it that way. It usually happens because incentives, habits, convenience, and social expectations quietly pull in that direction. Locally, the choices make sense. From a distance, they often look less convincing.
A while ago, I attended an information evening at my son’s school. Students could choose additional subjects based on their interests and possible future aspirations. The options included languages, natural sciences, engineering, and computer science.
What struck me was not the quality of the presentations, but where the attention went. The strongest interest gravitated toward what felt culturally attractive, familiar, and easy to imagine. For example: "If I learn Spanish, I can talk to the locals when I travel through South America." The more technical subjects, which demand more effort but also build deeper capabilities, generated less excitement.
This is not meant as criticism of individual choices. People react to the incentives and narratives around them. But it reminded me of something that shows up elsewhere too: systems often reward what looks good on the surface while undervaluing what is slower, harder, and more foundational. In engineering, this would be obvious. If you keep optimizing the visible layer and neglect the structure underneath, sooner or later the system suffers.
A typical example I have seen often: a team gets celebrated for shipping a quick fix for a reported bug. That is commendable, of course, but it is frequently the same people who introduced the bug in the first place. Meanwhile, teams that deal with their bugs quietly and do their job properly rarely get the same level of visibility or appreciation.
A similar pattern shows up inside us.
For a long time, I thought of happiness as some kind of ideal state — a place where work feels right, private life is balanced, and the overall direction makes sense. The problem with that idea is not that it is wrong, but that it can become a distorted benchmark. Normal life starts to feel insufficient.
Over time, I became more interested in a different word: contentment.
Not as resignation, and not as lowering standards. More as a kind of inner stability. The ability to live with an internal critic without letting it dominate everything. The ability to accept that meaningful work, relationships, and responsibilities usually happen under imperfect conditions.
That feels closer to reality, especially in engineering and leadership. Projects do not become easier because uncertainty disappears. Teams do not grow because everything is smooth all the time. Most of the time, progress comes from staying oriented, noticing smaller moments of alignment, and continuing without demanding perfection from every phase.
The same pattern becomes visible when looking at technology on a larger scale.
In Europe, there is a lot of discussion about digital sovereignty, and rightly so. In many areas, the technical tools needed for more independent and privacy-conscious digital infrastructure already exist. Open-source solutions are mature enough for many use cases in education and public administration.
And yet, dependencies remain strong. Children learn “computer literacy” inside ecosystems that lock them in early. Public institutions rely on structures that are convenient in the short term but make long-term independence harder. Industry often seeks speed through partnerships while slowly giving away capability.
The interesting question is not whether cooperation is good. Of course it is. The more interesting question is what a system is optimizing for when it repeatedly chooses dependency over capability, convenience over resilience, or speed over long-term autonomy.
From an engineering perspective, this is not surprising. If resilience is not made an explicit goal, most systems will optimize for short-term constraints.
A few years ago, after my family suffered a significant personal loss, the same pattern appeared much closer to home.
I started asking myself uncomfortable questions. How much time is left? Am I happy with my life? What am I actually trying to achieve? And whose goals am I pursuing?
I did not find neat answers to all of these questions. But one thought became much clearer than the others:
I want to live a long life and enjoy that time with the people I love.
Put next to many of the goals I had accumulated over the years — earning more, staying ahead of every new technology, optimizing myself endlessly, becoming more efficient, consuming more experiences — some of them started to look suspiciously external.
Not wrong, necessarily. But not fully mine either.
That was uncomfortable to admit. A lot of what I had accepted as “normal goals” had quietly been shaped by a system built around productivity, optimization, comparison, and consumption. Only when that frame was interrupted did I begin to ask whether the system I was living inside was actually aligned with what I valued.
Across these examples, the topics are different. Education. Emotional life. Digital infrastructure. Personal priorities.
But the pattern feels similar.
Systems optimize for what is visible, rewarded, and immediately legible. They do not automatically optimize for what is sustainable, meaningful, or capability-building over time. Unless those things are named explicitly and chosen repeatedly, they tend to get crowded out by easier metrics and more attractive narratives.
This is true for institutions, organizations, teams, and individuals.
- A team can optimize for activity instead of progress.
- A company can optimize for reporting instead of learning.
- A person can optimize for image instead of alignment.
- A society can optimize for convenience instead of resilience.
None of this happens because people are stupid or malicious. Usually it happens because systems are powerful, time is limited, and immediate rewards are persuasive.
That is why I find it useful, from time to time, to step back and ask a simple question:
What is this system optimizing for?
And then maybe the harder one:
Is that still what I want to optimize for myself?