Blog
Why engineering coordination relies on people, not process, and why that breaks at scale
Most engineering programs don't fail because the engineers lack skill. They fail because the system for coordinating engineering work is built around one or two people who know where everything is. Nobody notices that this is a structural problem until one of them leaves.
The concrete situation
Consider a satellite program at detailed design phase. Six subsystems, three suppliers, twelve engineers on the core team. The requirements live in a spreadsheet maintained by one systems engineer. Interface control documents are in SharePoint, versioned by filename. The mass budget is a separate Excel file, updated by a different person. Trade study results sit in a shared drive folder that half the team has never navigated.
Nobody is being careless. Each tool was a reasonable choice when it was adopted.
There is one person, typically the lead systems engineer, who holds the map in their head. They know which requirements were deferred last quarter, which interface was revised in the review on Thursday, which version of the mass budget reflects the current configuration. They did not set out to become the single point of coordination. They became it because no system was doing that job.
When they go on leave, decisions slow down. When they leave the company, the program spends weeks reconstructing what they knew.
This is an architecture problem, not a people problem.
Why this pattern persists
The reason it is so common is not negligence. Engineering programs are built to deliver hardware, not to design information infrastructure. In the early phases, informal coordination works. The team is small, the design is exploratory, and an experienced engineer can track most of it mentally. The tools are cheap and familiar, and the overhead of setting up something more structured feels disproportionate to the immediate problem.
The failure mode appears at the transition from concept to detailed design, or when the team grows beyond ten people, or when suppliers become part of the workflow. At those points, the number of dependencies between teams, tools, and documents grows faster than any individual can track.
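The claim that dependencies outgrow any individual's capacity is easy to check with back-of-the-envelope arithmetic (an illustrative count, not figures from a specific program): if every pair of artifacts can potentially depend on each other, the number of links to track grows quadratically with the number of artifacts.

```python
# Illustrative arithmetic: potential pairwise links among n artifacts.
# The counts are hypothetical, not drawn from a real program.
def pairwise_links(n_artifacts: int) -> int:
    """Number of possible one-to-one dependencies among n artifacts."""
    return n_artifacts * (n_artifacts - 1) // 2

for n in (6, 12, 30):
    print(f"{n} artifacts -> up to {pairwise_links(n)} links to track")
# 6 artifacts  -> up to 15 links
# 12 artifacts -> up to 66 links
# 30 artifacts -> up to 435 links
```

Doubling the artifact count roughly quadruples the links, which is why a coordination approach that worked for a six-document concept phase collapses at detailed design.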
New team members join without a mental model of where information lives. Decisions made verbally in meetings do not make it into any baseline. The design is no longer exploratory: every change has downstream consequences that need to be assessed, not just noted. What worked at the start of the program now requires a coordinator to function at all.
Because it worked before, the assumption is that it will continue to work, right up until it does not.
What it actually costs
These costs are not vague inefficiency. They show up as specific hours and specific delays.
In a typical program, a change to a system-level requirement takes two to three days to propagate through every connected document, and that is when things go well and the right person is available to trace the dependencies manually. When that person is unavailable, the change waits.
Design reviews routinely surface conflicts between documents that should have been identical: an interface definition updated in one place but not another, a requirement revised verbally in a meeting that never made it into the baseline. These are not caught before the review. They are caught at the review, in front of the customer, at the worst possible moment for credibility and schedule.
Engineers in complex programs spend between one and three hours per week just locating the current version of an artifact they need. Not reading it or acting on it. Just finding it.
When a key coordinator leaves a program, the realistic estimate to reconstruct their working picture of the design state is four to six weeks. That is not a disruption. That is a schedule hit, and it is not on the risk register until it has already happened.
What the alternative looks like
In a well-run engineering environment, coordination is a property of the system, not a capability held by one person. When a requirement changes, the impact is immediately visible across all connected artifacts, not because someone manually updated each one, but because they are structurally connected.
The current state of the design does not need to be reconstructed from scattered files. It can be read. New team members can orient themselves without a week of hand-holding from the one engineer who knows where everything is. A change made on Monday is visible to the full team on Monday.
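What "structurally connected" means can be made concrete with a minimal sketch. The artifact names and the `ArtifactGraph` class below are hypothetical, invented for illustration under the assumption that artifacts and their dependencies are modeled as a directed graph; a real tool would add versioning, permissions, and persistence.

```python
from collections import defaultdict

class ArtifactGraph:
    """Toy model: artifacts are nodes, edges point from a source
    artifact to the artifacts that depend on it."""

    def __init__(self):
        self.depends_on_me = defaultdict(set)

    def link(self, source, dependent):
        # Record that `dependent` must be re-checked when `source` changes.
        self.depends_on_me[source].add(dependent)

    def impact_of_change(self, changed):
        # Walk the graph to find every artifact transitively affected.
        impacted, stack = set(), [changed]
        while stack:
            for dep in self.depends_on_me[stack.pop()]:
                if dep not in impacted:
                    impacted.add(dep)
                    stack.append(dep)
        return impacted

# Hypothetical artifact identifiers, for illustration only.
g = ArtifactGraph()
g.link("REQ-SYS-042", "ICD-PWR-01")
g.link("ICD-PWR-01", "mass_budget")
g.link("REQ-SYS-042", "thermal_trade_study")

print(sorted(g.impact_of_change("REQ-SYS-042")))
# ['ICD-PWR-01', 'mass_budget', 'thermal_trade_study']
```

The point of the sketch is the query, not the data entry: once the links exist in a system rather than in someone's head, "what does this change touch?" becomes a lookup anyone can run, instead of a question only the lead systems engineer can answer.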
This is an architectural choice. The teams that achieve it made a deliberate decision, early, to treat their engineering data as a connected system rather than a collection of independent files that happen to describe the same program. The cost of making that decision early is low. The cost of not making it compounds with every week of the program.
A diagnostic you can run this week
Ask your team this question: if your lead systems engineer became unavailable starting tomorrow, how long would it take the rest of the team to answer these three questions without them?
1. What is the current approved baseline for your most critical system interface?
2. Which requirements were deferred in the last review cycle, and what is the current status of each?
3. Has the mass budget been updated to reflect the most recent change to any subsystem?
If the honest answer for any of these is more than thirty minutes, your coordination infrastructure lives in a person, not in a system. That is a risk, and it does not get smaller as the program grows.