The real cost of manual change propagation in complex engineering
When a parameter changes in a complex engineering program, the hard part is not updating the number. The hard part is everything that has to happen next.
The investigation that follows every change
A structural engineer on a launch vehicle program revises the mass estimate for a propulsion subsystem. The number goes up by four kilograms. They update the spreadsheet. Then they stop.
Because now they have to update the system-level mass budget, check whether the revised margin still meets the system requirement, notify the thermal team whose analysis was based on the previous mass, flag the interface control document that referenced the old configuration, assess whether the delta triggers a deviation from the launch vehicle interface specification, and update the review deck prepared last week.
None of that work is technically difficult. All of it takes time. And most of it depends on knowing what else referenced the original number, which lives at best in someone's memory, and at worst in nobody's.
The problem is not carelessness. The process has no mechanism for tracking what depends on what.
Why this keeps happening
Manual change propagation is the default because the tools were not designed to be connected. Each artifact, whether model, spreadsheet, document, or analysis script, was built to store information well. Not to react when another artifact changes.
The mass budget does not know it is referenced by the thermal model. The requirements tool does not know the ICD was updated to reflect a new configuration. The review deck does not know the system has moved since it was prepared. Integration between them is handled by people, through email, meetings, and manual cross-checks.
This is not a failure of process discipline. It is a structural property of how engineering tools were built, and no amount of rigor in the process makes up for a system with no connective tissue.
What it actually costs
The time lost to propagation is rarely tracked explicitly, which is part of why it persists. But the costs are real and specific.
In a ten-person engineering team, a significant parameter change can easily take two to four days to fully propagate through all connected artifacts, and that is when it goes well and the right people are available to trace every dependency. In supplier-extended programs, the timeline is longer and the visibility is worse.
The harder cost is incomplete propagation. The engineer who made the original change has no reliable way to know whether everything that needed to be updated has been updated. The best they can do is check the obvious places, ask around, and proceed. Which means decisions made in the days after a change, before propagation is complete, may be based on a configuration that no longer exists.
Design reviews surface conflicts between documents that were updated independently after the same change. Test plans reference requirements that reflect a superseded configuration. A supplier responds to an interface definition that the prime has since revised, without knowing it was revised. These are not unusual events. They are the natural output of a process that treats propagation as a person's responsibility rather than a system's property.
What the alternative looks like
When a parameter changes in a connected engineering environment, the impact is traceable immediately. Not because someone ran it down manually, but because the model that holds the change also holds the relationships to everything that depends on it.
The work of the change and the work of assessing its reach happen at the same time, not sequentially. An engineer does not have to spend two days asking who else needs to know about this, because the answer is readable from the structure of the system.
This does not eliminate engineering judgment. It eliminates the investigation that precedes the judgment. The engineer still decides what to do about the impact. They just do not have to discover the impact first.
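The connected-model idea above can be sketched as a simple dependency graph: each artifact records what depends on it, and a change is traced by walking the graph. This is an illustrative sketch only; the artifact names and the graph structure are hypothetical, not the API of any particular tool.

```python
from collections import deque

# Hypothetical artifacts from the mass-change example. Each key maps to the
# artifacts that directly depend on it.
DEPENDENTS = {
    "propulsion_mass_estimate": ["system_mass_budget", "thermal_model"],
    "system_mass_budget": ["mass_margin_check", "review_deck"],
    "thermal_model": ["review_deck"],
    "mass_margin_check": [],
    "review_deck": [],
}

def affected_by(changed: str, graph: dict[str, list[str]]) -> set[str]:
    """Return every artifact transitively affected by a change (breadth-first walk)."""
    seen: set[str] = set()
    queue = deque(graph.get(changed, []))
    while queue:
        artifact = queue.popleft()
        if artifact not in seen:
            seen.add(artifact)
            queue.extend(graph.get(artifact, []))
    return seen
```

With this structure, `affected_by("propulsion_mass_estimate", DEPENDENTS)` returns all four downstream artifacts at once, which is the "answer readable from the structure of the system" rather than reconstructed from memory and email.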
How to test your own process
Pick the last significant parameter change on your current program and ask three questions.
How long did it take for every affected artifact to reflect the change: not the initial update, but complete propagation? Who decided that propagation was complete, and on what basis? And is there any mechanism that would have surfaced an artifact that was missed?
If the answer to the second question is "we checked the obvious places and moved on," and the answer to the third is "not really," your propagation process is not a procedure. It is a bet. The bet usually pays off. When it does not, the cost shows up as a defect, a rework cycle, or a finding at the worst possible moment in the program.
The question worth asking is not whether you can afford to fix this. It is whether you can afford not to.