Why Agentic AI Will Make Organisational Psychology a Performance Function, Not a Wellbeing One
The productivity story about agentic AI is mostly true and almost entirely beside the point. The agents extend capacity, they handle the rote work, they free humans to do higher-value thinking, and most founders already understand all of this. What they are not paying attention to is that the introduction of agents into a company is, whether anyone intends it to be or not, a redesign of the human operating system underneath. It removes a set of accidental regulatory mechanisms the work used to contain. It changes the cognitive register of senior roles from execution to continuous supervision. It increases the volume of outputs humans have to evaluate while quietly shifting the comparison class from "people doing this job" to "people doing this job with their agents." None of this is a wellbeing problem in the conventional sense. It is an operating-system problem, and almost no one is yet building the function that would catch it.
I want to be clear up front that this is not a wellbeing essay. I am not arguing that founders should slow their agent adoption to protect their teams from discomfort, because the leverage is real and the productivity gains are real, and the companies that hesitate out of caution will lose to the companies that learn to use agentic systems well. I am arguing something narrower and operationally more interesting: that as agents make output abundant, the scarce resource inside AI-native companies becomes senior human judgment under load, and the organisations that protect and upgrade that judgment will outperform the organisations that merely automate more work. The cost of skipping the substrate layer is paid in exactly the part of the company hardest to measure on a dashboard, and the function required to prevent it is one that most frontier AI companies do not currently have.
The bottleneck moves
In pre-agentic companies, the bottleneck was usually production. There were not enough engineers, analysts, operators, designers, researchers, or executives to do all the work that mattered, and productivity tools helped because the central problem was throughput. Agentic AI changes the location of the bottleneck. Production becomes cheaper. Drafts, analyses, summaries, prototypes, plans, experiments, and recommendations become easier to generate, often faster than the organisation can intelligently absorb them. The bottleneck moves from output to integration: what to trust, what to ignore, what to escalate, what to stop, what to simplify, what to decide, what to leave unresolved. This is not just more work. It is a different kind of cognitive burden, and the people in the loop are no longer only doing the task. They are supervising a stream of partially completed, partially trustworthy, partially contextualised work produced by systems that do not tire, do not pause, and do not experience uncertainty in human terms. The human becomes the slow component in a fast system — the place where ambiguity, accountability, ethics, context, and prioritisation have to be integrated. That is a high-value role. It is also a high-load role.
When companies miss this distinction, they mistake increased output for increased intelligence. They see more work moving through the system and assume the organisation is becoming more capable. Sometimes it is. Sometimes the company is merely producing more material for already-loaded humans to triage — more summaries, more recommendations, more drafts, more dashboards, more decisions, more things that feel almost done but still require judgment. The danger is not that agents produce too little. The danger is that they produce so much that the human system loses its capacity to discriminate.
What changes when the agents start working
In a well-designed system, the standard productivity story — that agents free people to focus on higher-value work — can be true. During the transition, however, something more complicated happens, and it is worth describing concretely because most founders I talk to do not yet recognise it in their own companies. Work that used to happen during the day starts happening overnight, because the agents do not sleep. The volume of decisions a single person is responsible for reviewing increases, because every workflow now creates outputs that need acceptance, rejection, correction, or escalation. Communication inflates, because each workflow produces summaries, status updates, risks, and next steps that someone has to read. The unit of work becomes harder to see, because one person is now supervising several pseudo-colleagues whose outputs interleave across projects and time zones. Their calendar may not look dramatically worse, but the cognitive surface area has expanded — more open loops, more dependencies, more second-order consequences, more unresolved questions held simultaneously.
At the same time, the implicit performance standard shifts. No one needs to say it explicitly. The comparison class quietly changes from "other humans doing this job" to "other humans doing this job with their agents," and the bar moves while everyone pretends it has not. None of this is dysfunction in the dramatic sense. Most of it looks, on a given Tuesday, like ordinary professional adaptation. That is exactly why it accumulates. The team feels busier without feeling more productive. Senior people start describing their weeks in language that sounds like overwhelm but does not get named as overwhelm because the company is, by any external measure, doing well. By the time it shows up as attrition, conflict, or degraded decision-making, it has been in the system for months.
A category we do not yet have a name for
I think we need a name for this, because part of why companies miss it is that it does not match any existing category cleanly. It is not burnout in the classic sense, because there is no single overwork episode to point to. It is not under-resourcing, because the company has, if anything, given the team more leverage than ever. It is not bad management, because the managers are often doing exactly what the playbook says. And it is not merely a tooling problem, because the tool has changed the cadence of human work itself. It is something more like a structural mismatch between the cadence of the work and the cadence of the human nervous system, made visible by a system that removes the natural friction that used to enforce recovery.
The shorthand I have started using is agentic load, and I think it is a useful enough handle to share. Agentic load is what a human accumulates when they are supervising more parallel workflows than they can integrate, comparing themselves against a moving augmented baseline, responding to systems that do not stop, and doing all of this without the recovery rhythms that used to be built into the work by accident. It is the cost of being the slow component in a fast system, and what makes it dangerous is that it does not always present as distress. In high-performing environments it often first appears as speed, vigilance, irritability, shallow certainty, over-responsiveness, or the inability to stop — the kind of pattern that, in a culture that rewards heroic output, can be mistaken for elite performance until the senior person starts quietly looking for the door.
There is an organisational research literature on what happens to human cognition under chronic load, and the findings are reasonably consistent. Attention narrows. Information processing simplifies. People revert to familiar patterns. Communication becomes more defensive. Threat perception increases. The capacity to hold complexity declines. This matters in any organisation, but it matters especially in companies where senior people are making high-stakes, high-context judgment calls about non-deterministic systems, product direction, safety, deployment, governance, and capital allocation. I want to be careful not to overclaim here — the specific evidence base for this in agentic settings does not yet exist — but the underlying mechanisms are well replicated, and the inference seems to me reasonably safe: if the humans supervising powerful systems are operating in a chronically activated state, the quality of review, escalation, ethical reasoning, and risk perception degrades. In that sense, human oversight is not an abstract control layer. Human oversight has a nervous system, and the state of that nervous system becomes part of what determines whether the company can actually do the work it claims to be doing.
The lost function of friction
One of the most underappreciated changes in agentic work is the removal of accidental friction. Before agents, work contained recovery surfaces no one had designed: the wait for a colleague's reply, the slow turnaround on review, the genuine end of the workday because nothing meaningful would happen overnight, the meeting that ended ten minutes early, the delayed dependency, the commute, the pause between drafting and sending, the time it took another human to produce something. These were not always efficient, but they did regulatory work. They created gaps in which people could recover, reflect, integrate, and regain perspective. They limited the speed at which the system could pull on human attention. They created natural ceilings on decision volume. They gave the nervous system a rhythm.
Agentic systems remove much of this. The work continues, the draft is ready in seconds, the analysis arrives overnight, the next recommendation appears before the last one has been integrated. The company gains speed, but the human system loses the pauses that previously protected its judgment. This is not an argument for reintroducing inefficiency. It is the narrower argument that when accidental recovery disappears, intentional recovery has to be designed, and companies that remove the friction without replacing its regulatory function should expect the cost to reappear elsewhere — in rework, in conflict, in shallow decisions, in avoidant communication, in executive fatigue, and in the quiet departure of people whose expertise was essential but whose nervous systems could not adapt indefinitely to the new rhythm.
From execution to supervision
There is another shift that deserves more attention. Agentic AI moves many skilled people from doing the work to supervising the work, and this is often described as an upgrade. In some ways it is. A person with agents has more reach, more scope, and more leverage. But psychologically, supervision is not the same as execution. Execution can be absorbing. It provides direct feedback. It allows a person to experience mastery through contact with the work itself. Supervision is more vigilant. It requires monitoring, evaluating, correcting, anticipating, and deciding whether another system's output is good enough. It often fragments attention. It can increase responsibility while reducing the felt sense of craft.
This matters because many high-performing people regulate through absorbed attention. They recover not only by resting, but by entering states of deep engagement where they feel competent, focused, and in contact with the material. If agentic work strips away too much of that direct engagement, some senior people experience a strange paradox — greater leverage, but less mastery; more output, but less satisfaction; more scope, but less felt competence. Leaders sometimes interpret this as resistance to change. Sometimes it is not. Sometimes it is a signal that the organisation has redesigned the role in a way that removed the psychological conditions under which excellence was previously produced.
Talent density raises the cost of dysregulation
AI companies prize talent density, and rightly so. Exceptionally capable people can change the trajectory of a company. But there is a counterintuitive finding in the team-science literature that hiring exceptionally capable people raises rather than lowers the cost of poor coordination, because dependence on high-quality coordination rises with the capability of the people coordinating. Agents amplify this further. Each high-talent person now has more output capacity. Their decisions travel further. Their communication patterns propagate through more surfaces. Their preferences shape more workflows. Their blind spots scale. Their threat states scale. Their avoidant tendencies, over-control, perfectionism, or conflict patterns can be reproduced through the systems they supervise.
In a heavily agentic company, the cost of one dysregulated senior person is no longer linear in their seniority. It scales with their leverage. A threat-activated executive can degrade decision quality across surfaces they never personally touched, because their agents extend their style and judgment patterns at scale. A chronically overloaded product leader can create downstream churn through unclear prioritisation that propagates through every workflow they touch. A defensive engineering leader can turn ambiguity into unnecessary conflict across a much wider surface than before. A founder operating without recovery can confuse urgency with importance and transmit that confusion through the entire operating system. This is why organisational psychology becomes a performance function rather than a wellbeing one. The issue is not whether people feel good. The issue is whether the company can maintain clear perception, clean communication, trust, repair, and judgment under accelerating load.
Why existing functions often miss this
Agentic load does not sit cleanly inside any current organisational category, which is part of why it is being missed. It does not sit cleanly inside HR, because it is not primarily a benefits, engagement, or policy problem. It does not sit cleanly inside engineering, because the failure mode is not only technical. It does not sit cleanly inside operations, because the variable that matters, human cognitive load, is invisible to operational metrics. It does not sit cleanly inside coaching, because the issue is structural rather than individual. And it does not sit cleanly inside wellbeing, because the business risk is not just distress; it is degraded decision quality at scale.
The missing function is closer to clinically informed organisational design — the capacity to read the human substrate of the company under load, and to distinguish between three quite different problems: a stressed individual operating temporarily below their best, a structural design issue masquerading as an interpersonal one, and a stable pattern that will continue to compound until it is addressed directly. These three require very different responses, and confusing them is one of the most common ways companies make agentic transitions worse. This function would not replace HR, operations, leadership, or management. It would give them a missing lens. It is the kind of capability serious frontier AI companies will eventually need in the same way they eventually came to need serious security teams, and the agent transition compresses the timeline considerably. Standing it up before the first wave of senior departures is much cheaper than standing it up after.
What this implies for how you build
I want to spend the rest of this essay on practical implications for founders willing to take agentic load seriously. This is the operational part. There are five things I think actually matter.
Treat the agent rollout as a redesign of the human operating system, not as a tooling decision. Most companies are introducing agents the way they introduced productivity software — procurement, training, adoption metrics. This is the wrong frame, because the unit of change is not the tool, it is the rhythm of the work. The right frame is closer to how serious companies treat a major reorganisation: someone whose job it is to model second-order effects, surface where the old recovery surfaces lived, and design the new ones intentionally before the rollout, not after the attrition. Doing this once, deliberately, is cheaper than absorbing the cost of not doing it.
Build cognitive load into the things you already track. Companies track compute, capital, runway, and velocity because those things show up in failure modes they have learned to fear. Cognitive load belongs on the same list, not as a wellbeing metric but as a leading indicator of decision quality. The questions that matter are concrete: how many parallel agent workflows is each senior person supervising, how many decisions are they responsible for in a week, how much of their time is spent in absorbed-attention work versus supervisory review, where is rework increasing, where is escalation becoming noisy, where are people losing the capacity to simplify. None of this requires the kind of intrusive measurement that breaks trust. Most of it can be self-reported in a paragraph a fortnight. The point is that the team has a shared language for distinguishing high performance from disguised overactivation, which is the distinction founders are typically worst at making in their own companies.
Design communication norms around the assumption that agents will inflate, not compress, message volume. The naive expectation is that agents reduce communication load by handling routine work. The actual experience is that agents produce structured outputs that humans then have to read, evaluate, and route. Without discipline, this becomes a steady increase in the cognitive cost of staying informed, and the organisation becomes increasingly well-documented and decreasingly clear. The companies that handle this well will move hard in the direction of written-first decisions, fewer meetings, sharper escalation channels, and agent summaries designed to reduce noise rather than create it. The norm-clarity research from software engineering teams suggests this kind of communication discipline is independently valuable for performance, which is a happy coincidence. The goal is not more communication. It is a higher signal-to-noise ratio than the team had before agents arrived.
Separate the metric of output from the metric of judgment. A predictable failure mode of agentic adoption is that performance evaluation drifts toward raw output, because output is easier to count and agents make output cheap. This is the mechanism by which agentic load becomes structural rather than incidental — once the metric rewards volume, every individual is correctly incentivised to push their agents harder, and the load becomes the equilibrium rather than the exception. The companies that hold the line will measure something closer to what they actually wanted measured before agents arrived: decision quality, judgment under ambiguity, the ability to simplify rather than generate, the ability to stop doing things that should not be done. This is harder. It also happens to be the only metric that survives contact with a workforce in which raw production is no longer the scarce input.
Build the organisational psychology function before the second wave of departures, not after. The function is not a wellbeing function and it is not coaching. It is the substrate-reading capacity described earlier — closer to clinically informed organisational design than to anything HR currently delivers — and it sits most usefully alongside product, engineering, and operations rather than downstream of them. Companies that wait until the pattern is undeniable usually find that the people whose departure would have made the case have already left, and that the institutional memory required to design the new substrate well left with them.
The bet
I want to close honestly about what I am claiming and what I am not.
The strongest version of the argument is not that agentic AI is dangerous to humans, or that companies should slow their agent adoption to protect their teams. The leverage is real, the productivity gains are real, and the companies that hesitate on agentic adoption out of caution will lose to the companies that do not. What I am claiming is that those gains are conditional on a layer of organisational design that almost no one is doing yet, and that the cost of skipping that layer compounds quietly — distributed across the organisation, invisible on most dashboards, and visible only when senior people start leaving in a pattern the company struggles to explain.
I am presenting this as a strategic bet rather than as a proven forecast, and I want to be transparent about that. The peer-reviewed literature on human–agent teaming inside companies is thin, and what exists has not yet caught up with the cadence of the rollout. The argument is built by extension — from threat-rigidity research, from the team-science literature on regulation under load, from the venture-growth findings on rapid scaling, from what I have observed in companies trying this in earnest — and applied to a transition whose specifics are still moving. I think the synthesis is solid. The agentic-AI-specific evidence will take years to develop, and may revise parts of this picture. I have tried to be careful not to claim more certainty than I have.
The companies that take this seriously will look, from the outside, like they are absorbing agentic capability without losing the people who knew how to use it. The ones that do not will look like the pattern already starting to appear: rapid early productivity gains, a quiet attrition of senior people around the eighteen-month mark, and a slow drift from building the future to managing the consequences of a workforce that nobody designed for the rhythm it is now operating under. The next phase of this industry will not be won only by companies with the best models, or the best agents, or the most aggressive automation strategies. It will be won by the companies that can combine machine speed with human clarity — and in a world where output is increasingly abundant, that combination is the advantage that matters most.
Treating agentic load as a design problem rather than a wellbeing problem is not a decision that makes a company slower. It is the decision that determines whether the speed agents unlock is intelligent speed or merely speed. That distinction is, I think, the one everything else in this essay turns on.