What real traceability requires: why linking artifacts is not enough
Most programs that describe themselves as having traceability are tracking links. That is not the same thing. A record of what was intended when the link was created is very different from a system that reacts when the design moves.
What a coverage matrix actually tells you
Six weeks from PDR. The requirements coverage matrix shows 100%. Every system requirement is linked to a subsystem requirement, which is linked to a design element, which has a stated verification method. The matrix is complete and the program is on track.
A systems engineer runs a quick check before the review. A thermal requirement linked to an instrument interface has not been updated to reflect an analysis revised three weeks ago. The operating temperature constraint changed. The link still points to the original design element. The verification method still references the original limit.
The link exists. The link is stale. The program does not know.
Why this pattern is so common
Traceability in most programs is implemented as a compliance exercise. The question being answered is: does a link exist between this requirement and a downstream artifact? Creating the link is a one-time action taken during the requirements baseline process. It gets recorded, reviewed, and closed.
What the link does not do is monitor the validity of the connection it represents. When the upstream requirement changes, the link does not flag itself for review. When the downstream design element evolves, the link does not propagate a notification upstream. The traceability database holds a snapshot of intent from the moment the link was created.
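The snapshot behavior is easy to see in miniature. A sketch in Python, with hypothetical identifiers and a deliberately simple versioning scheme: the link records the artifact versions it saw at creation time, and nothing in the record changes when those versions move on. Any staleness check exists only if someone builds and runs it; the link itself never raises a flag.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TraceLink:
    """A trace link as most tools store it: a snapshot, not a monitor."""
    requirement_id: str
    design_element_id: str
    # Versions captured once, at link creation, and never revisited.
    requirement_version: int
    design_element_version: int

def is_stale(link: TraceLink, current_versions: dict) -> bool:
    """True if either endpoint has moved since the link was created."""
    return (current_versions[link.requirement_id] != link.requirement_version
            or current_versions[link.design_element_id] != link.design_element_version)

# The thermal example from above: the revised analysis bumped the design
# element's version, but the link still records the original one.
link = TraceLink("SYS-THM-042", "IF-INST-007",
                 requirement_version=3, design_element_version=5)
current = {"SYS-THM-042": 3, "IF-INST-007": 6}  # the design moved on
print(is_stale(link, current))  # the link is stale, but only this check knows
```

The point of the sketch is what is absent: nothing in `TraceLink` reacts to the version bump. Staleness is only visible to code that someone chose to write and schedule.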
This is not a tool failure. It is a structural limitation of traceability implemented as a documentation task rather than an engineering mechanism.
What it costs
The cost of stale traceability is not visible until it is too late to fix cheaply.
A complete coverage matrix creates the impression that the design is under control. Programs make resource and schedule decisions based on this confidence. When the matrix is complete but the links are stale, the confidence is not earned.
Test cases are written to requirements as they were recorded, not as they currently stand. A test that passes against a superseded requirement is not evidence of compliance. It is evidence that the test was written to the wrong target.
Requirements conflicts that would have been visible at design reviews, had traceability surfaced the impact, are instead found in integration or test, where the cost to fix them is an order of magnitude higher.
There is also the silent assumption. Engineers working downstream often assume that if a requirement affecting their work had changed, they would have been notified. In most programs, that assumption is wrong. The notification depends on someone remembering to tell them.
What real traceability requires
Real traceability is impact-aware. It does not just record that a connection exists. It actively surfaces when the connection needs to be revisited.
This requires two properties that most traceability implementations do not have.
The first is bidirectional reactivity. A change upstream propagates a signal downstream: the artifacts linked to this requirement may no longer be valid. A change downstream propagates a signal upstream: the requirement may no longer describe what the system actually does. Both directions matter.
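Bidirectional reactivity can be sketched with a toy in-memory link store (all names hypothetical, and real implementations would carry payloads describing what changed): links are stored symmetrically, so a change event on either end flags every directly linked artifact for review.

```python
from collections import defaultdict

class TraceGraph:
    """A toy impact-aware link store: changes propagate in both directions."""

    def __init__(self):
        self._neighbors = defaultdict(set)  # artifact id -> linked artifact ids
        self.needs_review = set()           # artifacts flagged by a change signal

    def link(self, upstream: str, downstream: str) -> None:
        # Stored symmetrically: a signal must be able to travel either way.
        self._neighbors[upstream].add(downstream)
        self._neighbors[downstream].add(upstream)

    def record_change(self, artifact: str) -> None:
        """A change anywhere flags everything directly linked to it."""
        self.needs_review |= self._neighbors[artifact]

g = TraceGraph()
g.link("SYS-THM-042", "IF-INST-007")   # requirement -> design element
g.link("IF-INST-007", "TC-THM-115")    # design element -> test case

# Upstream change: the requirement's limit is revised.
g.record_change("SYS-THM-042")         # flags the design element
# Downstream change: the design element evolves.
g.record_change("IF-INST-007")         # flags the requirement AND the test case
```

The second call is the direction most tools miss: the design element's evolution flags the requirement above it, not just the test below it.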
The second is connection to the live design. Traceability links that point to design elements in documents, rather than to elements in a shared model that reflects the current state of the design, will always lag reality. The connection is only as current as the last time someone manually updated it.
When both properties are present, the behavior changes. The coverage matrix does not just show whether links exist. It shows whether the links are still valid given the current state of the system. That is the distinction between a compliance record and an engineering tool.
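The distinction between a compliance record and an engineering tool can be made concrete. A sketch under the same version-comparison assumption as before, with hypothetical identifiers: the matrix reports two numbers, coverage (a link exists) and validity (neither endpoint has moved since the link was recorded).

```python
def coverage_report(links, current_versions):
    """links: iterable of (req_id, elem_id, req_ver_at_link, elem_ver_at_link).
    current_versions: artifact id -> current version.
    Returns (covered, still_valid)."""
    links = list(links)
    covered = len(links)
    still_valid = sum(
        1 for req, elem, req_ver, elem_ver in links
        if current_versions[req] == req_ver and current_versions[elem] == elem_ver
    )
    return covered, still_valid

links = [
    ("SYS-THM-042", "IF-INST-007", 3, 5),  # stale: the analysis bumped the element
    ("SYS-PWR-010", "PDU-003", 1, 2),      # still current
]
current = {"SYS-THM-042": 3, "IF-INST-007": 6, "SYS-PWR-010": 1, "PDU-003": 2}
covered, valid = coverage_report(links, current)
# A coverage-only matrix reports 2/2 and looks green; the validity
# column reports that only one of those links still holds.
```

A coverage-only report would have sailed through the PDR prep described above; the validity column is what catches the stale thermal link.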
The one test worth running
Pick one upstream requirement that changed in the last month on your current program.
Ask: how many of the artifacts linked to that requirement received an automatic notification that they may need to be reviewed?
If the answer is zero, your traceability system is a record of what was true when the link was created. That is useful for an audit. It is not useful for managing a design that is still changing.
The programs that catch requirements conflicts in design rather than in test are not doing more reviews or writing better requirements. They have a traceability architecture that surfaces the impact of change, rather than one that records the existence of a link and moves on.