Analysis · Giulia Paggiola

The risk in having too many risks…

Confusing failure modes for risks is one of the most common structural mistakes in medical device risk analysis — and one of the most costly to fix later. This article explains the difference between hazards, hazardous situations, and harm under ISO 14971, why a bloated risk analysis undermines your whole risk management process, and how one simple syntax rule can help you build a cleaner, more actionable document from the start.

Do you have more than 40 risks in your device risk analysis, for a device that isn't even invasive?

Most likely, they are not risks. They are failure modes. And confusing the two is one of the most common — and costly — mistakes I see in early-stage medtech.

A risk list that has grown out of control creates real problems:

  • It dilutes focus away from the risks that actually matter — the ones you should be able to recite off the top of your head.

  • It opens the door to inconsistencies and duplication in a document so large that no colleague will review it in detail, but that an auditor will flag immediately.

  • It turns every product feature into a hazard or risk control, which then warrants stricter testing requirements down the line.

  • It makes traceability in post-market surveillance and clinical evaluation a genuine operational nightmare.

I've seen many well-meaning startups suffer through the consequences of a badly designed risk analysis. The QARA who built it might feel proud of its thoroughness. But the rest of the team loses interest and never truly owns their risk areas. Management stops using it for decision-making. Product design becomes cluttered with risk controls — warnings, untouchable features — that nobody can explain.

This kills the collaborative and iterative spirit that is essential for good risk management.

So what's the difference between a risk and a failure mode?

A risk analysis table is built from three distinct layers, as described in ISO 14971 and ISO 24971:

  • Hazard categories — the potential sources of harm (energy, software, misuse; the full list is in ISO 24971)

  • Hazardous situations — the circumstances in which people are exposed to a hazard, including failure modes and external causes

  • Harm — the actual injury or damage to health that may result

The most common mistake is conflating hazards with hazardous situations — that is, treating failure modes as if they were risks in their own right. The terminology doesn't help, admittedly.

One simple strategy to keep your risk analysis clean

Use a fixed syntax to write your risks consistently. Here's one I find practical:

THERE IS A RISK OF [who] [hazard type faced] ORIGINATING FROM [list of failures and hazardous situations] WHICH MAY LEAD TO [harm type — pick only the highest level]

Two examples:

For a hardware device: There is a risk of the patient coming into contact with high voltage (electrical energy), originating from a) damage to the connecting cable, b) manufacturing defect, c) poorly designed insulation — which may lead to electric shock.

For a SaMD: There is a risk of the physician receiving inaccurate output from the device (incorrect medical decision), originating from a) algorithm design limitations, b) algorithm execution error, c) user interface failure, d) cybersecurity attack, e) unclear instructions for use — which may lead to delay in treatment.

Notice how multiple failure modes collapse into a single, well-defined risk. That's the point. Your risk analysis becomes shorter, more focused, and far easier to maintain over time.
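If it helps to see the structure rather than the sentence, here is a minimal sketch of the same idea as a structured record, in Python. The `RiskRow` class and its field names are my own illustration, not anything ISO 14971 prescribes; the point is simply that the failure modes live in a list inside one risk, not as rows of their own.

```python
from dataclasses import dataclass
from string import ascii_lowercase

@dataclass
class RiskRow:
    """One row of the risk analysis table, following the fixed syntax.

    Illustrative only: the field names are mine, not mandated by ISO 14971.
    """
    who: str                    # e.g. "the patient", "the physician"
    hazard: str                 # hazard type faced, e.g. "... (electrical energy)"
    hazardous_situations: list  # failure modes, use errors, external causes
    harm: str                   # highest-level harm only, e.g. "electric shock"

    def sentence(self) -> str:
        """Render the row back into the fixed syntax for review."""
        origins = ", ".join(
            f"{letter}) {cause}"
            for letter, cause in zip(ascii_lowercase, self.hazardous_situations)
        )
        return (
            f"There is a risk of {self.who} {self.hazard}, "
            f"originating from {origins}, which may lead to {self.harm}."
        )

# The hardware example from the article, expressed as one consolidated row:
row = RiskRow(
    who="the patient",
    hazard="coming into contact with high voltage (electrical energy)",
    hazardous_situations=[
        "damage to the connecting cable",
        "manufacturing defect",
        "poorly designed insulation",
    ],
    harm="electric shock",
)
print(row.sentence())
```

A row that cannot be expressed in this shape is usually a failure mode masquerading as a risk.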

If you're building the table manually, write the syntax in your header row. If you're using AI-assisted tools, enter it as a prompt constraint or use it to validate the output. If you're reviewing an existing table, run each row against it.
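For the last of those three cases, the check can be automated with a loose pattern match. A minimal sketch, assuming your rows use the exact keywords of the template above (adjust the pattern to your own wording):

```python
import re

# Loose pattern for the fixed syntax; the keywords are assumed to match
# the template above, so adjust them to your own wording.
RISK_PATTERN = re.compile(
    r"there is a risk of\s+.+?"
    r"\s+originating from\s+.+?"
    r"\s+which may lead to\s+.+",
    re.IGNORECASE | re.DOTALL,
)

def check_row(text: str) -> bool:
    """True if a row reads as a risk under the fixed syntax.

    A row that cannot be phrased this way is usually a bare failure
    mode that belongs in the hazardous-situation column instead.
    """
    return RISK_PATTERN.match(text.strip()) is not None

rows = [
    "There is a risk of the patient coming into contact with high voltage "
    "(electrical energy), originating from a) damage to the connecting cable, "
    "b) manufacturing defect, c) poorly designed insulation, which may lead "
    "to electric shock.",
    "Connecting cable may break.",  # a failure mode, not a risk
]
for r in rows:
    print(check_row(r), "-", r[:60])
```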

A risk analysis should be accessible to the whole team, actionable in decision-making, and sustainable as the product evolves. Getting the structure right from the start is one of the highest-leverage things a QARA can do in an early-stage company.

Deep Dive: Getting the Structure Right

Risk Analysis vs FMEA

Both are part of the Risk Management process under ISO 14971, but they serve different purposes and are not interchangeable.

Risk Analysis is mandatory. It is the top-level document that captures your device's safety profile — the full picture of what could go wrong, for whom, and with what consequences. Think of it as the billboard for your device's safety. It needs to tell a meaningful story, not overwhelm the reader with noise.

FMEA (Failure Mode and Effects Analysis) is a supporting analytical method — good practice, and often expected by auditors, but not explicitly required by ISO 14971 as a named technique. It is the drill-down tool: you take each component, subsystem, or process and ask systematically, how could this fail, and what would the effect be?

The same FMEA logic appears under different names depending on the domain:

  • In SaMD, it is often formalised as a Software Hazard Analysis (required under IEC 62304 as part of software risk management)

  • In usability engineering, it underpins the Use-Related Risk Analysis (URRA), which traces use errors and abnormal use to potential harm — a core deliverable under IEC 62366-1

  • In cybersecurity, it is effectively a vulnerability analysis or threat modelling exercise (with reference to MDCG 2019-16 and IMDRF guidance on cybersecurity)

Each of these domain-specific analyses follows the same logic: identify how something could fail, then trace that failure to a potential harm. The outputs of all of them feed into one Risk Analysis for your product — not multiple separate risk documents.

This is where the structural confusion often starts. Teams run an FMEA, a URRA, and a software hazard analysis, and then copy the failure modes directly into the Risk Analysis table. The result is a document that mixes hazards, hazardous situations, and failure modes in the same column, under the label "risk." Multiply that across a product with many subsystems, and you quickly reach 60, 80, or 100+ rows — most of which are not risks at all.
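To make the consolidation step concrete, here is a minimal sketch of how outputs from several domain analyses can be folded under their parent risks rather than pasted in as standalone rows. All risk IDs, source tags, and failure-mode texts below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical inputs: each domain analysis exports its failure modes
# tagged with the ID of the parent risk they contribute to.
domain_outputs = [
    ("RISK-01", "FMEA",  "damage to the connecting cable"),
    ("RISK-01", "FMEA",  "manufacturing defect"),
    ("RISK-01", "FMEA",  "poorly designed insulation"),
    ("RISK-02", "URRA",  "user selects wrong patient profile"),
    ("RISK-02", "CYBER", "tampered configuration file"),
]

# Consolidate: one risk row per ID, with the failure modes collected
# into its hazardous-situation column, instead of one pseudo-risk per mode.
hazardous_situations = defaultdict(list)
for risk_id, source, failure_mode in domain_outputs:
    hazardous_situations[risk_id].append(f"{failure_mode} [{source}]")

for risk_id, situations in hazardous_situations.items():
    print(risk_id, "->", "; ".join(situations))
```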

The three-layer structure

ISO 14971 and its companion standard ISO 24971 are clear on the terminology, even if teams frequently blur the distinctions in practice:

  • Hazard: a potential source of harm — an inherent property of the device or its environment (e.g. electrical energy, ionising radiation, software decision output)

  • Hazardous situation: the circumstance in which a person is exposed to a hazard — this is where failure modes, use errors, and external conditions live

  • Harm: the physical injury or damage to health or property that results

A well-structured risk analysis row moves through all three layers. The failure modes — however many there are — belong in the hazardous situation column, not in a row of their own. That single structural choice is what keeps the document manageable.

A note on harm classification

For the harm column, the IMDRF Adverse Event Terminology provides a standardised, hierarchical coding system that is increasingly expected in technical documentation and is directly useful in post-market surveillance reporting. Using it consistently from the start — rather than free-text descriptions — saves significant effort later when feeding into your PMSR or PSUR.
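One way to enforce that consistency is a small controlled vocabulary that maps free-text harm descriptions to coded terms and fails loudly on anything unmapped. A minimal sketch; the codes below are placeholders, not real IMDRF Adverse Event Terminology entries, so look the actual codes up in the IMDRF browser:

```python
# Minimal controlled vocabulary mapping free-text harm descriptions to
# coded terms. The codes are PLACEHOLDERS, not real IMDRF Adverse Event
# Terminology codes: look the real ones up in the IMDRF browser and keep
# this mapping under document control.
HARM_TERMS = {
    "electric shock":     ("IMDRF-XXXX", "Electric shock"),
    "delay in treatment": ("IMDRF-YYYY", "Delay of treatment"),
}

def classify_harm(free_text: str) -> tuple:
    """Map a free-text harm to a (code, preferred term) pair,
    failing loudly on anything not yet in the vocabulary."""
    key = free_text.strip().lower()
    if key not in HARM_TERMS:
        raise KeyError(f"Unmapped harm term {free_text!r}; add it to HARM_TERMS first")
    return HARM_TERMS[key]

print(classify_harm("Electric shock"))  # -> ("IMDRF-XXXX", "Electric shock")
```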

Practical checklist

  • Can every row in your table be read using the [who / hazard type / originating from / harm] syntax? If not, it may be a failure mode, not a risk. (A sketch for automating this check follows the checklist.)

  • Are failure modes consolidated under their parent risk, rather than listed as standalone rows?

  • Is the harm column using consistent, ideally IMDRF-aligned terminology?

  • Could a new team member read the Risk Analysis and understand the device's core safety story in under 30 minutes?
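If your table lives in a spreadsheet, the first check can be run over a CSV export in a few lines, reusing the pattern from the validator sketch above. The file name and the risk_statement column header are assumptions to adapt to your own template:

```python
import csv
import re

# Same loose pattern as in the validator sketch above; the file name and
# the "risk_statement" column header are assumptions to adapt.
pattern = re.compile(
    r"there is a risk of\s+.+?\s+originating from\s+.+?\s+which may lead to\s+.+",
    re.IGNORECASE | re.DOTALL,
)

with open("risk_analysis.csv", newline="", encoding="utf-8") as f:
    for i, record in enumerate(csv.DictReader(f), start=2):  # row 1 is the header
        if not pattern.match(record["risk_statement"].strip()):
            print(f"Row {i}: does not read as a risk; likely a bare failure mode")
```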

References

  • ISO 14971:2019 and ISO 24971:2022 — available via the Estonian Standards store (a legitimate national standards body selling the same official text) at a significantly lower cost than buying directly from ISO

  • IMDRF Adverse Event Terminology browser — for standardised harm classification

  • IEC 62304 (software lifecycle) and IEC 62366-1 (usability engineering) — for domain-specific hazard analysis requirements that feed into the Risk Analysis

    Methodology note: This article is based on two original LinkedIn posts (first, second) written by me, reflecting my professional experience and personal perspectives on risk management in medical device development. Claude AI assisted in combining and expanding the posts into a broader article for this blog, integrating background context, regulatory references, and a structured Deep Dive section. All regulatory perspectives and practical recommendations are my own, and all content has been reviewed by me for accuracy.
