What a real digital thread requires in hardware engineering
The same number lives in six places in most hardware engineering programs. The problem is not that it is duplicated. The problem is that nobody knows which copies are current.
Where the duplication comes from
A systems engineer needs the current dry mass for a payload subsystem. They check the master mass budget spreadsheet, but are not sure their local copy reflects an update made two weeks ago. They ask the subsystem lead, who confirms the mass but gives a different number. They find a third figure in the last formal review presentation. There is a fourth in the interface control document and a fifth in the test plan.
They spend forty minutes resolving a question they have asked before.
This is not an unusual situation. In hardware engineering, critical parameters (mass, power consumption, thermal dissipation, interface dimensions, operating voltage) exist simultaneously in multiple documents, each maintained by a different person on a different schedule. Keeping them consistent is manual work, and no single person explicitly owns it.
The duplication was not a deliberate choice. It is the natural output of an engineering process built from tools that were not designed to share data with each other.
Why integration was left to people
Hardware engineering tools were built for specific tasks. CAD tools for geometry. Analysis tools for simulation. Requirements tools for compliance. Documents for communication and record-keeping. Each tool does its job well. None of them were designed to feed a shared data model that other tools read from.
The result is that every document is an island. A mass budget spreadsheet holds numbers. An ICD holds interface definitions. A test plan holds verification methods. The same physical property appears in all three because each document needs it for a different purpose, and because there is no mechanism to ensure they all read from the same source, they are maintained independently and they drift.
Keeping them consistent requires a person to notice when one source changes and manually update the others. In a program with ten critical parameters, fifty documents, and three suppliers, that is a full-time job that nobody has been formally assigned.
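That noticing step can itself be sketched as a check. Below is a minimal illustration of drift detection across duplicated copies of one parameter; the document names and values are hypothetical, standing in for wherever the copies actually live.

```python
# Hypothetical copies of one parameter, as recorded in different documents.
copies = {
    "mass_budget.xlsx": 142.7,    # kg, updated last week
    "icd_rev_c.docx": 141.9,      # kg, from the last formal release
    "test_plan.docx": 142.7,      # kg
    "review_slides.pptx": 140.5,  # kg, from an older baseline
}

def find_drift(copies, tolerance=0.0):
    """Return whether the copies disagree, and the distinct values recorded."""
    values = sorted(set(copies.values()))
    drifted = (max(values) - min(values)) > tolerance
    return drifted, values

drifted, values = find_drift(copies)
if drifted:
    print(f"Drift detected: {len(values)} distinct values recorded: {values}")
```

The point of the sketch is what it lacks: nothing in the program runs this comparison routinely, so drift is discovered only when someone stumbles into it.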
What a digital thread actually requires
A digital thread is often described as connecting the tools in an engineering program. That framing is not wrong, but it understates the structural requirement.
Surface-level tool integration (APIs that push data between applications, dashboards that pull from multiple sources) reduces friction but does not solve the problem. If the mass budget and the ICD are connected by an integration that copies a value from one to the other, the value is still duplicated; it is just duplicated faster. When the integration breaks, or someone edits the copy directly, the connection is gone.
A real digital thread requires a shared data model that is the authoritative source for the parameters that appear in multiple places. Not a copy of the parameter in each tool. One instance, in one place, that all connected documents and analyses read from directly.
This changes the structure of the problem. When the mass changes in the model, the ICD and the test plan do not need to be updated. They read from the model. They are already current. When an engineer opens the mass budget, they know the number is current because the process of changing it requires updating the model, and everything reads from the model. The forty-minute search for the right answer becomes a ten-second lookup.
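The structural difference can be shown in a few lines. This is a minimal sketch, not a real product architecture: the class, parameter names, and "documents" below are hypothetical, and the documents are rendered as views that read the model at the moment they are produced rather than holding their own copies.

```python
# One authoritative store for shared parameters.
class ParameterModel:
    def __init__(self):
        self._params = {}

    def set(self, name, value, unit):
        self._params[name] = (value, unit)

    def get(self, name):
        return self._params[name]

model = ParameterModel()
model.set("payload.dry_mass", 142.7, "kg")

# "Documents" are views: they read the model, they never store the value.
def render_icd_row(model):
    value, unit = model.get("payload.dry_mass")
    return f"Payload dry mass: {value} {unit}"

def render_mass_budget_row(model):
    value, unit = model.get("payload.dry_mass")
    return f"dry_mass,{value},{unit}"

# A change in the model is reflected everywhere, with no update step.
model.set("payload.dry_mass", 143.1, "kg")
print(render_icd_row(model))
print(render_mass_budget_row(model))
```

Because the views hold no state of their own, "updating the documents" stops being a task: the only editable thing is the model.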
What changes when it works
The practical effect of a real digital thread is not efficiency in the abstract. It is the restoration of trust in the data.
In programs without a shared data model, engineers develop rational habits of verification: cross-checking numbers, asking colleagues, maintaining local copies of files they need to trust. Each of these habits is costly on its own, and in aggregate the cost is enormous. They are also a sign that the process has trained engineers not to trust what they are looking at.
When there is one source and everyone reads from it, that verification overhead disappears. Engineers move faster not because they are working harder but because they are not spending time confirming things that should not need confirming.
This also changes what is possible with automation. Automated checks, whether mass rollup validation, requirement impact assessment, or interface consistency verification, only work reliably if they are operating on a single consistent dataset. When the data lives in six places, automation produces six different answers and creates more work than it saves.
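A mass rollup check is the simplest of these automated checks, and it illustrates the dependency: the check is only meaningful because both the subsystem masses and the declared total come from the same dataset. The subsystem names, masses, and tolerance below are hypothetical.

```python
# Hypothetical subsystem masses, all read from one shared dataset.
subsystem_masses_kg = {
    "structure": 48.2,
    "power": 31.5,
    "avionics": 22.6,
    "payload": 142.7,
}

declared_total_kg = 245.0  # the system-level total in the same dataset

def check_rollup(parts, declared_total, tolerance_kg=0.1):
    """Verify that subsystem masses sum to the declared system total."""
    rolled_up = sum(parts.values())
    delta = rolled_up - declared_total
    return abs(delta) <= tolerance_kg, rolled_up, delta

ok, rolled_up, delta = check_rollup(subsystem_masses_kg, declared_total_kg)
print(f"rollup ok: {ok}, sum = {rolled_up:.1f} kg, delta = {delta:+.1f} kg")
```

Run the same check against six independently maintained copies of those numbers and it will return six different verdicts, which is the "more work than it saves" failure mode described above.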
The ratio test
Pick one critical parameter in your current program: dry mass, power budget, operating temperature, interface voltage, whatever is most consequential right now.
Count how many places that parameter currently lives: spreadsheets, documents, analysis tools, interface definitions, review presentations.
Then ask: if that parameter changes today, how many of those instances update automatically, and how many require a person to do it manually?
The ratio of manual to automatic updates is a direct measure of how much of your digital thread is still missing. In most programs, almost nothing updates automatically and nearly everything is manual. That is not a technology limitation. It is an architectural choice, and it can be made differently.
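The test above reduces to a trivial tally. The instance names and their manual/automatic labels here are hypothetical, standing in for whatever the audit of one parameter actually turns up.

```python
# Hypothetical audit of where one parameter lives, and how each copy updates.
instances = {
    "mass_budget.xlsx": "manual",
    "icd.docx": "manual",
    "test_plan.docx": "manual",
    "review_slides.pptx": "manual",
    "thermal_model": "automatic",  # reads the value through an integration
}

manual = sum(1 for mode in instances.values() if mode == "manual")
automatic = len(instances) - manual
print(f"{manual} manual / {automatic} automatic")
```

The exercise is worth doing on paper; the code only makes the point that the answer is a count, not an opinion.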