Final FDA Guidance on Mobile Medical Apps
Today the FDA issued its final guidance (PDF) on mobile medical apps, clarifying which apps are likely subject to the FDA’s regulatory oversight and which are not. Here is a summary, kept as short as seemed reasonable, of the FDA’s 43-page guidance paper.
Mobile medical apps subject to regulatory oversight
- Mobile apps that are an extension of medical device(s) by connecting to them.
(remote display of bedside monitoring, PACS viewer, insulin pump control, …)
- Mobile apps that transform the mobile platform into a regulated medical device by using attachments, display screens or sensors or by including functionalities similar to those of currently regulated medical devices.
(glucose strip reader, ECG signal display by attaching electrodes to the mobile device, monitoring sleep patterns via the built-in accelerometer, …)
- Mobile apps that become a regulated medical device (software) by performing patient-specific analysis and providing patient-specific diagnosis, or treatment recommendations.
(dosage plan calculator for radiation therapy, medical image processing software, …)
Mobile apps generally safe from regulatory oversight
- Mobile apps that provide or facilitate supplemental clinical care, by coaching or prompting, to help patients manage their health in their daily environment.
(medication reminders, nutrition plans, …)
- Mobile apps that provide patients with simple tools to organize and track their health information.
(blood pressure logging, medication intake time logging, …)
- Mobile apps that provide easy access to information related to patients’ health conditions or treatments.
(drug-drug interaction checker, best practice guidelines, …)
- Mobile apps that are specifically marketed to help patients document, show, or communicate to providers potential medical conditions.
(videoconferencing, photo sharing with a clinician, …)
- Mobile apps that perform simple calculations routinely used in clinical practice.
- Mobile apps that enable individuals to interact with PHR systems or EHR systems.
Mobile apps that are not considered medical devices
- Mobile apps that are intended to provide access to electronic “copies”.
(medical dictionaries, electronic textbook copies, translation, …)
- Mobile apps that are intended for health care providers to use as educational tools for medical training.
(medical flash cards, instructional videos, interactive quizzes, …)
- Mobile apps that are intended for general patient education and facilitate patient access to commonly used reference information.
(question asking, pill identification, medical facility finder, …)
- Mobile apps that automate general office operations in a health care setting.
(billing, shift management, bed space management, …)
- Mobile apps that are generic aids or general purpose products.
(use the phone as a magnifying glass (NOT specifically marketed for medical applications!), audio recording, …)
For those fellow medical app developers who might be affected by these guidelines, it’s probably a good idea to take a look at the original publication. There are quite a few “may”s and “at this time”s in the document, and an app might not be as safe from regulation as its categorization suggests, and vice versa.
My take? Very reasonable.
open.epic
Epic, the most powerful EHR vendor in the US, is touting their new open.epic movement. Information on what it is is scarce at the moment, as all you can see at the linked website is a cartoon, an email address and bad font kerning.
But from the cartoon one can readily infer what their “open” service seems to be:
A way to make self-tracking services funnel data into Epic’s EHR silo.
AMIA CRI Slides and Paper
For this year’s AMIA TBI-CRI joint summit I wrote a short technical paper on the creation of our Indivo iOS Framework, mentioning some implementation details and the problem of OAuth secrets on mobile devices. The accompanying presentation deviated a little: I focused more on Indivo, personally controlled health records, and why mobile matters not only for big companies but also for disease registries and related health databases.
Pediatric Growth Charts on iOS
As part of my current fellowship I’m creating a pediatric growth charts app for iOS, which is coming along nicely. The app will be used to demonstrate EHR integration using SMART here at Boston Children’s Hospital and I also plan on collecting questionnaire-style feedback from frequent users of the app. More on this later, though.
As a side product, I will release the app for free on the App Store once it’s mature enough, hoping it will be useful to pediatricians and possibly even parents.
The main idea of the app is to re-use existing growth chart PDFs and draw the measurements onto them. I have already implemented the WHO and CDC charts and will add a few more. The LMS data tables are also integrated, so the app can calculate z-scores and percentiles for you.
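The LMS method behind those tables follows Cole’s standard formula. As a rough sketch (in Python for illustration; the app itself is an iOS app), the z-score and percentile calculation could look like this:

```python
from math import erf, log, sqrt

def lms_z_score(x, l, m, s):
    """Z-score for a measurement x given the LMS parameters
    (L = skewness, M = median, S = coefficient of variation)."""
    if l == 0:
        # The limit case of the LMS formula as L approaches 0
        return log(x / m) / s
    return ((x / m) ** l - 1.0) / (l * s)

def z_to_percentile(z):
    """Convert a z-score to a percentile via the standard normal CDF."""
    return 50.0 * (1.0 + erf(z / sqrt(2.0)))
```

A measurement equal to the table’s median M always yields a z-score of 0, i.e. the 50th percentile, which makes for a handy sanity check.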
Here’s a sneak peek at the overview screen. Tapping a chart will pull it up fullscreen, zoomable, printable, shareable.
On Implementing Medical Scores
Today I sat down to implement one of the often requested formulas for MedCalc, ISTH-DIC, which consists of two scores by the International Society on Thrombosis and Haemostasis for detecting overt and non-overt disseminated intravascular coagulation.
It’s always good to see scores publicly available and seemingly straightforward [PDF]. But as I started to implement the score for overt DIC, some all-too-familiar problems surfaced:
Missing units
Platelet count is defined as: >100 = 0; <100 = 1; <50 = 2
Well, 100 what? Kilogram? Sure, it’s obvious in this case, and it’s clear from the main publication that it has to be Giga per liter (×10⁹/l), but this is the official score reference; it must be specific about the units to use!
Less than vs. greater than
For three of the four criteria, the dividing values are not defined:
- Platelet count: >100 = 0; <100 = 1; <50 = 2
It is not clear how many points should be assigned if the count is exactly 100
- Prothrombin time: < 3 sec = 0; > 3 but < 6 sec = 1; > 6 sec = 2
It is not clear how many points should be assigned for both 3 and 6 seconds
- Fibrinogen level: > 1.0 gram/l = 0; < 1.0 gram/l = 1
Again, what if the level is exactly 1.0?
This is very much a real problem; take prothrombin time, for example. If I programmatically implement the scoring, there are two obvious ways:
- Test for > 6 seconds, then > 3, and assign no points to the remainder:
3 seconds gets zero points.
- Start by checking < 3, then < 6, and assign 2 points to the remainder:
3 seconds gets one point.
Both approaches are equally valid programming strategies, yet they yield different values for the same input, simply because the dividing values are not precisely defined.
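To make the ambiguity concrete, here are the two strategies side by side (sketched in Python for brevity):

```python
def pt_points_desc(seconds):
    """Strategy 1: test the upper bounds first; the remainder gets 0 points."""
    if seconds > 6:
        return 2
    if seconds > 3:
        return 1
    return 0

def pt_points_asc(seconds):
    """Strategy 2: test the lower bounds first; the remainder gets 2 points."""
    if seconds < 3:
        return 0
    if seconds < 6:
        return 1
    return 2

pt_points_desc(3.0)  # → 0 points
pt_points_asc(3.0)   # → 1 point
```

For any prothrombin time other than exactly 3 or 6 seconds, the two functions agree; at the undefined dividing values, they diverge.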
Implement your scores!
The best way to ensure unambiguous medical scores is for the authors to programmatically implement their score following their own definition. Not only does this ease implementation by others, it should also catch these edge cases (which are not rare at all!) and ensure that scores are implemented correctly everywhere.
Note that this score is far from the only one exhibiting issues like these!
I have asked the authors for clarification on these issues and will update this post once I get a reply.
Update: I have received clarification from the authors, who propose to use ≤ wherever the score says <. Very well, that’s how I’m going to implement it.
ITdotHealth 2012 Conference
This week saw the second ITdotHealth conference taking place at Harvard Medical School. Focused on SMART and how to liberate data in the EHR world, the conference drew attendees from across the US and even Switzerland.
It was most encouraging to see people from all the different corners of healthcare IT — hospital CIOs, physicians, EHR vendor CTOs, HIT researchers and government officials — share the dream of accessible patient data. It seemed to me that everybody agreed it was time for medicine to take advantage of all the data currently locked away in hospital data silos, and that the way to achieve this feat would be with SMART or technology just like it. The icing on the cake was probably Clayton Christensen’s keynote, in which he laid out the theories from his book The Innovator’s Dilemma and its follow-ups in a convincing and inspiring talk.
Luckily for you, writers far more skilled than I’ll ever be attended the conference and have already converted their impressions to binary data. One is a short recap by John Halamka himself, and the second is a very nice roundup of insights gained at the conference by Andy Oram for O’Reilly. Absolutely worth a read!
The conference was recorded and the coverage material will be available soon; I’ll put up a link once it is.