For most payer organizations, risk adjustment workflows were built around one core assumption: coding happens after the fact.
Charts are retrieved weeks or months after an encounter. Coding teams review documentation. Diagnoses are captured. Additional passes attempt to identify anything that may have been missed. Validation layers are added to ensure compliance and audit readiness.
On paper, the process appears thorough and controlled.
In practice, it often becomes a cycle of recovery.
Teams work to find what was missed, reconstruct context, and close gaps long after the original encounter has occurred. Over time, this introduces additional cost, longer timelines, and increased operational complexity.
What is less frequently examined is the role that timing plays in all of this.
The Workflow Most Risk Adjustment Teams Operate Within
Retrospective coding remains the dominant model across payer organizations.
The structure is familiar:
- Charts are requested from providers
- Records are retrieved and aggregated
- Coders review documentation
- HCCs are captured
- Validation and audit processes follow
This approach works, but it works late.
By the time coding begins, several constraints are already in place.
Documentation context is no longer fresh. If clarification is needed, it is difficult to obtain. Any gaps tied to the original encounter have already passed through the care cycle. Coding teams are left working from what is available, not what could have been captured.
As a result, the process becomes dependent on multiple downstream layers.
Additional chart reviews. Secondary sweeps. Quality control passes.
Each layer is designed to improve accuracy, but each also adds time, cost, and variability.
Where Timing Begins to Influence Outcomes
In risk adjustment, accuracy is always a priority.
But when accuracy is achieved matters more than most workflows acknowledge.
When coding occurs months after the encounter:
- Some diagnoses are more difficult to validate
- Supporting context may be incomplete or harder to interpret
- Missed codes are less likely to be recoverable
- Audit defensibility becomes more complex
Even when retrospective processes are effective, they often require more effort to produce the same result.
Timing does not determine whether accuracy is achievable. It affects how efficiently and consistently that accuracy can be achieved.
The Structural Difference: Retrospective vs Concurrent Coding
Concurrent coding introduces a different approach.
Instead of waiting for delayed chart retrieval cycles, records are accessed closer to the time of the encounter. Coding begins while documentation is still recent and more complete.
This is not simply a faster version of the same workflow.
It changes how the workflow behaves.
When records are available earlier:
- Coding decisions are made with better context
- Data completeness is easier to evaluate
- Gaps are identified while they are still visible
- Fewer downstream recovery cycles are required
The work shifts from reconstructing information to working with it while it is still intact.
Why Early Accuracy Changes the Equation
There is a common assumption that as long as codes are eventually captured, the timing does not materially affect outcomes.
In practice, it does.
Accuracy achieved earlier in the process tends to be:
- More complete
- Easier to validate
- Less dependent on multiple review layers
- More consistent across large populations
It also reduces the need for repeated chart access and reprocessing, which can be one of the more resource-intensive aspects of risk adjustment operations.
Late accuracy can still contribute to financial performance, but it typically requires more effort to achieve and may not capture the same level of completeness.
Why Retrospective Models Continue to Expand
Most organizations respond to performance gaps by adding layers.
More chart retrieval. More review passes. More validation steps.
These adjustments are logical within a retrospective model.
But they tend to increase operational load without addressing the underlying timing constraint.
The result is a system that becomes more complex over time.
More people. More steps. More cost.
Yet the core dependency remains unchanged.
Coding still begins after the encounter.
What This Suggests About Workflow Design
For risk coding leaders, the question is not whether retrospective workflows can be improved.
They can.
The more important question is whether the structure of the workflow itself is limiting performance.
When visibility into clinical data occurs later in the process, the organization is inherently operating in a recovery model.
When visibility occurs earlier, the workflow begins to shift toward completeness at the outset rather than correction later.
This distinction affects not just efficiency, but predictability.
A Different Way to Evaluate Performance
Most risk adjustment programs measure success through outputs.
- RAF capture
- Coding accuracy
- Audit results
These are important.
But they do not fully explain how those results are achieved.
A useful additional lens is operational effort.
- How many passes are required to reach accuracy
- How much rework is needed
- How dependent the process is on recovery cycles
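As an illustration of the effort lens, the relationship between timing and review passes can be sketched as a toy model. All of the numbers below are hypothetical assumptions for illustration, not benchmarks: the premise is simply that fresher documentation raises the share of diagnoses captured in a single pass, which in turn reduces the number of passes needed to reach a target capture rate.

```python
# Toy model (hypothetical numbers): how many review passes are needed
# to reach a target cumulative capture rate, given the probability that
# any single pass captures a given diagnosis.
import math

def passes_to_target(per_pass_capture: float, target: float) -> int:
    """Smallest number of passes whose cumulative capture meets the target.

    After n passes, cumulative capture = 1 - (1 - p)^n, where p is the
    per-pass capture probability.
    """
    return math.ceil(math.log(1 - target) / math.log(1 - per_pass_capture))

# Assumption: coding closer to the encounter yields a higher per-pass
# capture probability than coding from months-old documentation.
concurrent = passes_to_target(per_pass_capture=0.90, target=0.95)     # -> 2
retrospective = passes_to_target(per_pass_capture=0.60, target=0.95)  # -> 4

print(f"Concurrent:    {concurrent} pass(es)")
print(f"Retrospective: {retrospective} pass(es)")
```

Under these assumed inputs, the same 95% capture target costs twice as many passes in the late-coding scenario, which is the shape of the effort difference the questions above are probing.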
In many cases, timing plays a significant role in shaping those answers.
The Underlying Question
For a Director of Risk Adjustment, the challenge is not just capturing diagnoses.
It is doing so in a way that is scalable, defensible, and operationally sustainable.
Which leads to a different question.
Not just: How do we find more codes?
But: What would change if coding began earlier, while the documentation was still easier to interpret and validate?
That shift in timing does not replace accuracy as a goal.
It changes how efficiently that accuracy can be achieved.
And in many workflows, that difference becomes more significant over time.
