Why Secure-Base Culture Is Performance Infrastructure for Frontier AI Teams

I spend a lot of my time thinking about how AI systems work, and how to make them work better. But the older I get, and the longer I run a company, the more I find myself convinced that the bottleneck on a frontier AI team is almost never the part you can measure on a dashboard. It is not compute, not benchmark scores, not the talent pool. Those things matter enormously, of course — without them, nothing else matters. But conditional on having them, the thing that determines whether a frontier AI team produces what it is capable of producing is something much more ordinary, and much harder to count: whether the people on it stay regulated enough, under pressure, to surface bad news early, ask for help without status games, disagree directly, and repair conflict before it metastasizes.

I want to be clear up front that this is not a wellbeing essay. I am not going to argue that founders should care about their teams' inner lives because it is the right thing to do, although I happen to believe it is. I am going to argue something narrower and, I think, more useful: that the social condition of a frontier AI team is part of its performance infrastructure in the same sense that compute and tooling are. The evidence base for this claim is stronger than most founders realize, and the implications are operational rather than therapeutic. If you are running a company in this industry, this is one of the cheaper, higher-leverage investments you can make, and almost no one is making it well.

The conversion problem

The dominant theory of organizational performance in frontier AI is something like this: hire the densest possible concentration of talent, give them resources, get out of the way, move fast, and trust that good outcomes follow. Culture, in this view, is a downstream output — what naturally emerges when smart people work alongside one another. The serious questions are about compute, model architecture, and recruiting. Everything else is HR.

I think this gets the conversion problem almost exactly backwards.

Talent is the input to organizational performance. The output — the thing that actually matters — is coordinated decisions made under pressure. The conversion rate between the two is determined by social processes that the organizational psychology literature has studied for decades and that most companies systematically under-invest in: how teams handle disagreement, how information flows when something is going wrong, how quickly people can repair after rupture, how candidly junior members can challenge senior ones. None of this is news to organizational researchers. What seems to be news, or at least under-applied, is the question of what produces those conditions in the first place — and what happens to a frontier AI team when they are absent.

Let me give a quick tour of what the evidence actually says, because I think a lot of founders dismiss this material on the assumption that it is soft, when it is in fact much more rigorous than its reputation suggests.

A 2012 meta-analysis of intrateam trust found that trust is positively related to team performance, and the relationship is stronger in contexts with high task interdependence and differentiated expertise — which is exactly the profile of a frontier AI team. Meta-analytic work on team communication finds that quality of communication matters more than frequency, and that information elaboration and knowledge sharing are particularly tied to performance. Amy Edmondson's work on psychological safety, replicated many times, finds that safety predicts unique variance in task performance and citizenship behavior beyond what other factors explain, with effects flowing through team learning and collective efficacy. In software engineering teams specifically, both psychological safety and norm clarity predict performance and satisfaction, with norm clarity often the stronger predictor.

The picture across these literatures is consistent. Teams that can speak truthfully, learn from error, coordinate clearly, and trust each other under interdependence outperform teams that cannot — and the gap widens as work becomes more uncertain and more consequential. That is not a soft finding. It is one of the more reliably replicated patterns in organizational science. The question I want to explore in this essay is what produces those conditions, why frontier AI work is unusually corrosive to them, and what founders can actually do about it.

Attachment as a self-regulation system

One of the better-developed answers to the question of what produces those conditions comes from adult attachment theory, which has had a quiet but rigorous decade-long renaissance in workplace research. A 2024 meta-analysis of 109 independent samples found that attachment style is meaningfully related to job performance, burnout, satisfaction, and organizational commitment, with trust in supervisors mediating part of the effect. A 2024 theoretical integration sharpened the mechanism: attachment matters at work because it shapes how people self-regulate under strain — how they appraise threat, seek support, manage impulses, and conserve regulatory resources when things get hard.

That last clause is the one founders should pay attention to, because it connects this literature directly to the operating conditions of frontier AI.

Acute stress reliably impairs working memory and cognitive flexibility. Time pressure degrades executive function. Perceived threat tends to produce a fairly specific cognitive profile that researchers call "threat-rigidity": narrowed attention, simplified information processing, centralized control, and reversion to familiar routines. These are not character flaws; they are how human nervous systems work under load. And frontier AI work generates these conditions structurally. People are making consequential decisions on insufficient sleep, with incomplete information, in conditions of rapid contextual change and very high felt stakes. Under that load, the question is not whether stress activates the nervous system. It will. The question is whether the team has the relational substrate to stay coordinated when it does.

This is what "secure-base conditions" describe. Someone operating from a secure base under pressure can stay regulated long enough to think clearly, ask for what they need without strategic maneuvering, trust that conflict is recoverable, and update on new information without it feeling like a loss of face. Insecure patterns produce the opposite — heightened threat response, defensive communication, difficulty repairing after rupture, a tendency to either over-pursue or withdraw. None of this is dysfunction in the dramatic sense that the word suggests. Most of it looks, on a given Tuesday, like ordinary professional behavior. That is exactly why it goes unnoticed and accumulates.

The translation founders need is, I think, this: attachment shapes how people behave when stress activates them, and stress activation is the operating condition of frontier AI. Secure-base conditions therefore function as upstream infrastructure for the team processes — voice, trust, communication quality, repair, learning — that the performance literature already identifies as central. They are not a wellness perk. They are the substrate that determines whether your other investments compound or leak.

What this looks like at work

The clearest signal that a team is operating from an insecure substrate is rarely a dramatic incident. It is a pattern of small things that, taken together, slowly degrade the team's capacity to think clearly under pressure. I will try to make these concrete, because I think founders often nod along to abstract claims about culture and then fail to recognize the same dynamics in their own teams.

A senior engineer who consistently doesn't ask for help when stuck. Risks that calcify before they get flagged. Feedback consistently received as a referendum on worth rather than as information. Velocity that drops in ways no retrospective surfaces, because the loss has been made invisible.

A founder who micromanages under stress, can't tolerate ambiguity in a direct report's loyalty, and turns ordinary disagreements into existential ones. The organization quietly learns to manage the founder's regulation rather than think clearly about the work. Critical information stops flowing upward — not because anyone decided it should, but because the cost of delivering it has gotten too high.

A leadership team that appears calm and decisive but quietly accumulates unresolved disagreements. Strategic decisions get made by who outlasts the silence rather than by who has the strongest argument. The team congratulates itself on its lack of drama and wonders why execution is slower than it should be.

A team with mixed insecure patterns — common in fast-growing startups — that produces chronic miscoordination, factional alliances, and a steady leak of senior talent who report on their way out that "the culture wasn't right" without being able to name what was actually wrong.

I want to emphasize that these are behaviors consistent with insecure coping under pressure, not clinical diagnoses. The pattern matters more than the label. What the literature supports is that these patterns are predictable, costly, and modifiable — and that they compound in environments of high interdependence and high stakes. Frontier AI is, by construction, such an environment.

The compounding cost of destructive patterns

Some patterns of insecure functioning are particularly costly at scale, and a serious organizational psychology function should be equipped to recognize them.

When a senior team member chronically externalizes blame, splits the team into camps, manipulates information flows, or cannot tolerate disagreement without escalation, the cost compounds quickly. Trust erodes. Information stops flowing. Other senior people start spending significant cognitive load managing the dynamic rather than doing their actual work. Junior members learn to route around the issue, which fractures the team's information topology and degrades decision quality across the organization. By the time the pattern is finally named clearly, significant damage has usually been done — often including the loss of senior people who quietly chose to leave rather than continue managing it.

Most leadership teams handle this poorly, and I have been in that category myself. They lack vocabulary for what is happening. They hesitate to address it, because the person in question is often genuinely capable in other respects and the organization has come to rely on them. They tend to treat the symptoms — a particular conflict, a particular departure, a particular escalation — rather than the underlying pattern. The cost is paid in slow, distributed degradation that rarely shows up as a single visible failure on any dashboard, but that anyone who has worked inside such an organization will recognize.

A mature organizational psychology function — and I think serious frontier AI companies will eventually need one, in the same way they have needed serious security teams — includes the capacity to recognize these patterns early, support the individuals involved with appropriate care, and help the organization respond before fracture. This is not about diagnosing people from the outside, which I think would be both ethically wrong and operationally useless. It is about having the clinical depth to distinguish between a stressed person operating below their best, a structural problem masquerading as an interpersonal one, and a stable pattern that will continue to compound until it is addressed directly. The three require very different responses, and confusing them is one of the more common ways organizations make these situations worse.

Why velocity, stakes, and talent density raise the cost

Three features of frontier AI work make the substrate more consequential than in most industries. I want to take them one by one, because the third in particular is counterintuitive and I think founders systematically miss it.

The first is velocity. Frontier AI moves faster than the human capacity to integrate what is happening. Velocity is regulation's natural enemy: the faster the work, the more decisions get made from default patterns rather than from considered judgment. A team that cannot repair quickly cannot operate at high speed for long without fracturing. There is a useful body of research here: the recent venture-growth literature finds that rapid scaling is associated with higher employee burnout and lower job satisfaction, a structural cost of velocity that founders typically under-price. We focus on the visible costs of moving fast (technical debt, hiring mistakes) and miss the relational ones, which are larger.

The second is stakes. The work feels important to the people doing it, to investors, and to the public, often in ways that exceed any reasonable proportion. High felt stakes activate the threat response more reliably than almost anything else. The same engineer who is calm and effective on a routine product can become reactive and defended on a frontier release. This is not a character flaw. It is how human nervous systems work under perceived threat, and the threat-rigidity research shows what follows: narrowed attention, simplified processing, centralized control, reversion to routines — exactly the cognitive profile you do not want when the work requires careful, high-context judgment under ambiguity. The work needs your team's best thinking precisely when the conditions of the work are pushing them away from it.

The third is talent density, and this is the one I think most founders get wrong. The standard assumption is that hiring exceptionally capable people solves coordination problems. Smart people figure it out. In practice, talent density raises the cost of poor regulation, because dependence on high-quality coordination rises with the capability of the people coordinating. Exceptionally capable people are often capable partly because they have developed strong, well-defended patterns for protecting their work, their judgment, and their sense of competence. Those defenses are valuable in some contexts and very expensive in collaborative ones. A room of brilliant operators who cannot disagree well or repair after rupture will produce less collective intelligence than a room of merely good operators who can. I have watched this happen, in my own company and in others, often enough to be confident in the claim.

The strongest defensible version of the talent-density argument is therefore not that secure-base conditions matter more than talent. Talent matters enormously, and I would not trade ours for anything. The argument is that in talent-dense, high-interdependence environments, secure-base conditions raise the conversion rate from talent to coordinated performance — and that conversion rate is where most organizational performance is actually won or lost.

What this implies for how you build

I want to spend the rest of this essay on practical implications for founders willing to take the substrate seriously. This is the operational part. There are five things I think actually matter.

Treat repair as a core capability. Most performance cultures optimize for the upswing — output, velocity, ambition. The teams that sustain performance over time also invest, deliberately, in the downswing: how the team handles failure, disagreement, and rupture. Repair capacity is the multiplier on every other capability you have, because it bounds the cost of disagreement and makes harder problems tractable. You cannot ask for the kind of direct, high-stakes feedback that frontier AI requires from people who have no faith that the relationship will survive it.

Treat founder regulation as infrastructure. The senior leadership of a company sets the regulatory baseline for everyone below them. The literature on startup CEO communication is consistent on this point: founder responsiveness, assertiveness, and authenticity shape downstream employee relational and behavioral outcomes, and there is emerging evidence that a founder's fear of failure can spread through an organization via emotional contagion. The most leveraged investment most AI founders could make in their company's performance is probably in their own regulation. I say this as someone who has had to learn it in real time, and who is still learning it.

Hire and promote for secure-base behaviors, not for inferred attachment styles. I would not recommend attachment-style screening as a selection tool. The evidence is not strong enough and it invites amateur diagnosis, which I think is dangerous. What is defensible is structured assessment of observable behaviors: how a candidate describes past conflicts, whether they can name their own contribution to ruptures, whether they can disagree directly in the interview without becoming activated, whether they ask for what they need cleanly, whether they update visibly when given new information. These behaviors are observable and they are reasonably stable predictors of how someone will function under team pressure.

Design communication norms that assume regulation is finite. Async-by-default, written-first, calibrated meeting rhythms — these are not productivity hacks. They reduce load on the part of the system most likely to fail under sustained pressure. The norm-clarity research from software teams suggests they are also independently valuable, which is a happy coincidence.

Distinguish secure-base culture from low-accountability culture. This is the most important boundary condition, and I want to spend a moment on it because I think it is the single point most likely to be misread in this essay. Psychological safety is sometimes romanticized as comfort. The literature is clear that it is not the same thing. Secure-base cultures are not low-standard cultures; they are cultures where high standards do not require interpersonal threat to enforce. Edmondson's distinction between psychological safety and accountability is the relevant frame: the goal is not to make work feel comfortable but to make it possible to think clearly while doing demanding work. If your culture has been engineered for comfort, you have built the wrong thing.

The bet

I want to close by being honest about what I am claiming and what I am not.

The strongest version of this argument is not that attachment quality replaces talent, strategy, capital, or technical execution. It does not. What I am claiming is that in highly interdependent AI teams working under sustained uncertainty, secure-base conditions are part of the performance infrastructure. They are what keeps people regulated enough to surface bad news, ask for help, disagree directly, and repair quickly. They do not replace the other inputs to performance, but they determine whether those inputs convert into fast, reliable coordination under pressure.

I am presenting this here as a strategic bet rather than as a proven forecast, and I want to be transparent about that. There is not yet a peer-reviewed literature comparing frontier AI teams specifically on attachment quality and performance outcomes. The argument is built by synthesis — from adult attachment research at work, from team science, from high-reliability operations, from software engineering teams, from venture-growth research — and applied to frontier AI as a critical application domain. I think the synthesis is solid. The frontier-AI-specific evidence will take years to develop, and may revise parts of this picture. I have tried to be careful not to claim more certainty than I have.

The companies that take this seriously will look, from the outside, like they are moving fast without breaking. The ones that do not will continue producing the pattern already visible across the industry — explosive early progress, predictable senior departures around the eighteen-month mark, and a quiet shift from building the future to managing the internal weather. I have watched enough cycles of this to be reasonably confident in the prediction.

Treating the substrate as infrastructure rather than as an HR problem is not the kind of decision that makes a company slower. It is the decision that determines whether speed remains intelligent under load. That distinction — between speed and intelligent speed — is, I increasingly think, where the next phase of this industry will be won and lost.
