PCCP beyond AI
There's a very exciting trend of femtech apps integrating with wearable data! How does this work for the regulated ones? I wanted to share a clever use of PCCP by Natural Cycles° from last year that impressed me.
What's PCCP?
A Pre-determined Change Control Plan is a regulatory instrument devised by the FDA that, as a European, I'm most jealous of. It was designed to enable AI devices, which by design need to evolve their accuracy in the field, getting smarter the more data they acquire. Traditionally, any change to the accuracy or performance of a device required a regulatory resubmission (still the case in the EU) and a review wait of up to 90 days.
With a PCCP, you can get pre-approval for a reasonable range of performance changes that you anticipate and accept.
What I found clever is that Natural Cycles°, the pioneer of regulated fertility awareness, used a PCCP not for AI changes but for the variability of source data from different wearables.
While, as far as I'm aware, they currently integrate only with ŌURA and Apple Watch, this clears the way for them to swiftly add further integrations to their conception/contraception suite, as long as the devices fit their predefined specs (see table in pdf).
This is an example of how:
1️⃣ Regulatory instruments that are smart and abreast of the times enable even more innovation than they originally intended,
2️⃣ Femtech is riding the wave of biomarkers, ensuring most users can be served irrespective of which devices they choose - it's not just the iOS vs Android divide anymore!
3️⃣ Scientific research and clinical partnerships will see an incredible boost of opportunity from all this data, finally compensating for the lack of data that we know women's health has suffered from until now!
What else could we use PCCP for? And when will we have a similar toolkit in Europe under the MDR? 🫠
NC's current integrations here
Link to full 510k summary here
LLM for Quality tasks
A short story on using AI for a QARA task and coming up with a framework for doing it faster (4h down to 1h) while keeping it under control.
Task at hand:
A client received their inspection report from the authority by post, in the national language, and needed it digitised and translated into English in order to action it.
1️⃣ Convert scanned pdf to electronic document
ChatGPT 👎 couldn't identify the text in the scanned pdf.
Gemini and NotebookLM did it, but I was unconvinced by the accuracy 🧐 .
GoogleDrive did the job: uploaded the pdf and used "Open as GoogleDoc". ✅
2️⃣ Translate electronic document
ChatGPT and Gemini kept hallucinating badly 😵💫 .
The "Translate document" function of GoogleDocs returned a poor literal translation 🥴 .
NotebookLM was accurate but skipped content 😥 .
Ended up doing section by section via Gemini's in-text "AI Refine" function with a very meticulous prompt and checking it manually in a side-by-side table 🥵 .
3️⃣ Format electronic document similar to the original
ChatGPT and NotebookLM didn’t work 🤕 .
Gemini could do some basic improvements via the in-text "AI Refine" function, but not via the GoogleDocs built-in "Ask Gemini" nor via the browser chat. Interesting how much these differ in capability.
In the end, the formatting fix was mostly manual 🤯 .
Conclusion:
After 4 miserable hours spent on the task, with many failed attempts and far too much manual input, I achieved a satisfactory document.
But, I still wanted to get to the bottom of this. There must be a better way??
So I restarted from scratch with a different approach, which I'd summarise as a loop inspired by the PDCA / Agile cycle we use in Quality:
⤵️ Plan: Ask AI for the right tools and prompts to achieve your goal. And importantly, "ask AI to ask you" questions or point out what is unclear in order to help you refine your requirements accurately.
▶️ Do: Approach it step by step. Run your refined prompt for your SUBtask in your selected tool. Quick review of the output, refine the prompt. Change tool if needed.
⏯️ Check: Get AI to verify its results and to help you check them manually by highlighting any discrepancies. For example, “juxtapose the original and translated content in a table, section by section, and note any discrepancies between the two versions of the text”.
🔁 Act: Tell AI to correct the discrepancies, then re-run the verification step to update results.
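The Check step can even be partly scripted rather than left entirely to the AI. Here's a minimal Python sketch of the "juxtapose section by section" idea, assuming both documents are plain text with sections separated by blank lines; the section splitting and the length-ratio thresholds are my own rough heuristics, not features of any of the tools above:

```python
# Rough completeness check for a section-by-section translation review.
# A translated section far shorter (or longer) than its original is a hint
# that the AI skipped or hallucinated content and needs a manual look.

def split_sections(text: str) -> list[str]:
    """Split a plain-text document into sections on blank lines."""
    return [s.strip() for s in text.split("\n\n") if s.strip()]

def juxtapose(original: str, translated: str,
              low: float = 0.5, high: float = 2.0) -> list[dict]:
    """Pair sections side by side and flag suspicious length ratios."""
    orig_secs = split_sections(original)
    trans_secs = split_sections(translated)
    rows = []
    for i in range(max(len(orig_secs), len(trans_secs))):
        o = orig_secs[i] if i < len(orig_secs) else ""
        t = trans_secs[i] if i < len(trans_secs) else ""
        ratio = len(t) / len(o) if o else float("inf")
        flagged = not (low <= ratio <= high)  # missing or bloated section?
        rows.append({"section": i + 1, "original": o,
                     "translated": t, "flagged": flagged})
    return rows

# Example: the second section was skipped by the translation tool.
rows = juxtapose(
    "Erster Abschnitt.\n\nZweiter Abschnitt mit Details.",
    "First section.",
)
print([r["section"] for r in rows if r["flagged"]])  # → [2]
```

The flagged rows tell you exactly which sections deserve the manual side-by-side review, instead of re-reading the whole document.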
Eventually, by doing it this way, I achieved the same result in 1h and with increased confidence in the accuracy. Still not extremely fast, but considerably faster!
I am curious, how would others have approached this dull task?
EU AI Act deployment
Since August 2nd, the EU AI Act has been in force. But is it, really?
In practice: not much applies today, but the clock has started. If your device includes an AI component or uses AI to support decisions, it’s time to take a closer look.
For high-risk systems, including many AI-based medical devices, there’s a 36-month transition to comply, i.e. phased implementation. However, some provisions apply earlier (e.g. banned uses of AI, codes of conduct).
Here’s what I see across medtech:
1. Confusion around scope and classification, e.g. AI as a tool for CSV or as part of the intended use?
2. Assumptions that MDR compliance = AI Act compliance, leading to reactive QMS updates upon NB feedback rather than a proactive approach.
3. Teams don't know how to resource it.
The good news is that I also see a booming AI-related offering from QARA consultants and training providers, which can help if you’re stuck on any of the above points. Cool examples (among many others):
• AI-first QARA frameworks and training e.g. Johner Institut GmbH https://lnkd.in/dBSuFfie,
• AI agents for compliance-checking and even FDA review outcome prediction such as Lexim AI or Acorn Compliance,
• GenAI embedded in eQMS tools such as Formwork from OpenRegulatory or Matrix One
What would help your team implement the AI Act? Curious to hear your challenges and to help you find the right support.