The statistic that roughly four out of five ICM transmissions are benign is a population-level figure. It means that across a large panel of ICM patients over a representative time window, approximately 80% of transmissions contain no findings that warrant clinical action. What it does not mean is that any given coordinator can reliably identify which 20% require action without actually reviewing the other 80%. The challenge of ICM telemetry is not that most transmissions are benign. The challenge is that benign and actionable transmissions often look similar until you have done the review.
Understanding the signal quality issues and artifact mechanisms that generate the bulk of that 80% makes it possible to build review workflows that spend less time on definitively benign items and more time on genuinely ambiguous ones.
An ICM records a far-field subcutaneous EGM from a position in the left parasternal chest. The electrode spacing is small compared to a surface ECG lead, and the signal represents electrical activity reaching the electrodes through variable tissue layers. Several physical and physiological sources, including myopotential noise from adjacent skeletal muscle and signal amplitude variation that produces undersensing, contribute to a noisy baseline or spurious events.
ICM devices apply onboard artifact rejection algorithms that use signal consistency criteria to distinguish true cardiac rhythms from noise bursts. These include blanking periods after each sensed event, minimum signal amplitude thresholds, and morphology consistency checks that compare each cycle to a stored template. The algorithms are effective at rejecting obvious high-frequency noise but have known limitations with subtler artifact sources such as myopotential oversensing and progressive undersensing.
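The three onboard criteria can be sketched as a simple gate on each sensed event. This is an illustrative simplification, not any vendor's actual firmware: the function name, threshold values, and the idea of a precomputed template correlation score are all assumptions for the example.

```python
# Illustrative sketch of onboard-style artifact rejection checks.
# Each sensed event carries a timestamp, a sensed amplitude, and a
# morphology correlation score against the stored sinus template.
# All constants below are made-up values for demonstration only.

BLANKING_MS = 150        # ignore events inside the post-sense blanking period
MIN_AMPLITUDE_MV = 0.15  # reject events below the sensing floor
MIN_MORPH_CORR = 0.7     # reject events dissimilar to the stored template

def accept_event(prev_ts_ms, ts_ms, amplitude_mv, morph_corr):
    """Return True only if a sensed event passes all three checks."""
    if prev_ts_ms is not None and ts_ms - prev_ts_ms < BLANKING_MS:
        return False  # falls inside the blanking period after the prior sense
    if amplitude_mv < MIN_AMPLITUDE_MV:
        return False  # below the minimum amplitude threshold
    if morph_corr < MIN_MORPH_CORR:
        return False  # fails the morphology consistency check
    return True
```

The limitation described above falls out of this structure: myopotential noise that happens to clear the amplitude floor and loosely resemble the template passes all three gates.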
Artifact filtering in an analytics layer like Implansense operates differently from onboard device filtering. Rather than rejecting signals before classification, it analyzes the classified output — the episode reports and EGM strips that the device has already generated — and applies a retrospective plausibility assessment. For a pause flag, the filtering module examines the pre-pause and post-pause EGM morphology to assess whether the pause is preceded by a signal amplitude change consistent with progressive undersensing rather than true sinus arrest. For a VT flag, the module evaluates the morphology consistency across the flagged beats and compares against the stored sinus template to estimate artifact probability.
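The pause-flag plausibility check lends itself to a small sketch: if beat amplitudes decline steadily into the apparent pause, the device plausibly stopped seeing beats rather than the heart stopping. The function name, the scoring formula, and the monotonicity heuristic are assumptions for illustration, not the platform's actual method.

```python
# Hypothetical sketch: retrospective plausibility check on a pause flag.
# A steady amplitude decline toward the sensing floor before the "pause"
# is more consistent with progressive undersensing than sinus arrest.

def undersensing_likelihood(pre_pause_amplitudes_mv):
    """Crude score in [0, 1]: fractional amplitude lost across the
    pre-pause window, weighted by how monotone the decline is."""
    if len(pre_pause_amplitudes_mv) < 2:
        return 0.0
    first, last = pre_pause_amplitudes_mv[0], pre_pause_amplitudes_mv[-1]
    if first <= 0:
        return 0.0
    drop = max(0.0, (first - last) / first)
    # Require a broadly monotone trend, not just two noisy endpoints.
    declines = sum(
        1 for a, b in zip(pre_pause_amplitudes_mv, pre_pause_amplitudes_mv[1:])
        if b <= a
    )
    monotone_frac = declines / (len(pre_pause_amplitudes_mv) - 1)
    return drop * monotone_frac
```

A window of stable amplitudes scores near zero (the pause may be real); a window that fades toward the floor scores high (the pause is likely artifact).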
The output is not a binary artifact/not-artifact determination but a probability score. A 0.87 artifact probability on a VT flag means the morphology features are strongly consistent with myopotential noise. A 0.34 artifact probability on the same flag means the features are ambiguous and EGM review by a coordinator is warranted. This allows the review workflow to concentrate human attention on the ambiguous cases rather than requiring EGM review for every flagged episode.
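The workflow implication of a probability score rather than a binary label can be shown as a three-way triage. The thresholds here are illustrative operating points chosen for the example, not published values:

```python
# Hypothetical triage sketch: route flagged episodes by artifact probability.
# Threshold values are illustrative, not vendor-published operating points.

SUPPRESS_ABOVE = 0.85   # strongly artifact-like: file without EGM review
ESCALATE_BELOW = 0.15   # strongly rhythm-like: route straight to clinician

def triage(artifact_probability):
    if artifact_probability >= SUPPRESS_ABOVE:
        return "suppress"            # e.g. the 0.87 VT flag in the text
    if artifact_probability <= ESCALATE_BELOW:
        return "escalate"
    return "coordinator_review"      # ambiguous, e.g. the 0.34 example
```

Only the middle band consumes coordinator time, which is the point of scoring rather than binary labeling.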
Not all arrhythmia episode types have the same benign rate. Published device validation data show that the fraction of artifact or otherwise benign findings varies substantially by episode type.
The highest-yield targets for artifact filtering investment are VT flags and pause flags, where the absolute false positive rates are highest and the clinical consequence of unnecessary escalation — urgent calls, expedited visits, or unnecessary workups — is greatest.
Artifact filtering improves specificity. It does not replace clinical judgment on ambiguous EGMs. A coordinator presented with a pause EGM that shows subtle baseline changes before the apparent pause still needs to evaluate whether those changes represent progressive undersensing or true sinus slowing before closing the review. Filtering narrows the cases requiring that judgment; it does not eliminate them.
The 4-in-5 benign figure is a useful way to frame the scope of the problem, but the clinical value of an analytics platform is measured by whether it correctly identifies the one transmission in five that requires action — without missing actionable findings in pursuit of specificity. An artifact filter that achieves 90% specificity by suppressing some true VT events has not solved the problem; it has created a different one. The design constraint is high specificity on benign classification with high sensitivity maintained for the actionable minority.
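The asymmetry in that design constraint is easy to see numerically. The episode counts below are invented for illustration; they show how a filter can report better specificity while being worse on the metric that actually matters:

```python
# Illustrative computation with made-up counts: a filter can post higher
# specificity while failing the real design constraint (no missed VT).

def sens_spec(true_pos, false_neg, true_neg, false_pos):
    """Standard definitions: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Filter A: 90% specificity, but it suppresses 3 of 20 true VT episodes.
sens_a, spec_a = sens_spec(true_pos=17, false_neg=3, true_neg=72, false_pos=8)

# Filter B: slightly lower specificity, but no true VT episode is missed.
sens_b, spec_b = sens_spec(true_pos=20, false_neg=0, true_neg=68, false_pos=12)
```

Filter A wins on specificity (0.90 vs 0.85) yet misses 15% of true events; under the stated constraint, Filter B is the acceptable one.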