Monday, September 11, 2017

Digital Health Companies Have to Be Careful; Otherwise, They Could Spoil the Game for Everyone


NOTE: It has been a busy few weeks, and it is going to be a busy few months ahead, so my blogging frequency is going to be erratic at best. It is a shame, because I wanted to write on Prostate Cancer Awareness Month and then on Breast Cancer Awareness Month in October. However, Maker Faire exhibits, conference presentations, panels, and judging lie ahead for me, so please excuse the varied publishing schedule.

SyncThink

MedCity News reported on a warning letter from the FDA to a burgeoning company, SyncThink, and the company seems to have responded promptly, as one would hope. However, the episode still leaves a trail of caution for companies going forward. Here's a quote from the MedCity article:

SyncThink’s device Eye-Sync was cleared as a prescription device for the purpose of recording, viewing, and analyzing eye movements to help identify visual tracking impairment in human subjects, according to the letter.

However, it appears their overzealous marketing folks decided to go ahead and bump up the device to include concussion detection as a "feature". Why not? Studies about the NFL and various sports organizations deliberately ignoring the issue have left many athletes with permanent ailments. Concussions also happen due to accidents, falls, and so on, so the available market obviously spans a wide range of ages, and gender is no bar. And yes, concussions can be life-threatening. None of this, of course, means that you should put the proverbial cart before the horse.

SyncThink, of course, claims it immediately removed the materials that offended the FDA from all of its marketing channels.

Good, right? Well...

A Trail of Bad and Unacceptable Behavior

The history of Digital Health is rife with examples of bad behavior.

Let's start with Not-AI/Maybe-AI/Most-Likely-ML

Actually, this behavior spans all of healthcare. Consider IBM Watson: according to a recent STAT article, it was promoted as a cancer therapy recommendation system and sold prematurely, resulting in a number of problems. This is something I have suspected for a while now, and it has been confirmed. Jumping the gun and shipping poor, dysfunctional machine learning products, worse than what a teenager can put together with TensorFlow, has become the order of the day for some companies. And to see IBM, glorified by Satell in one of the iconic books on innovation, do this is just sad. It should make you wonder about others.

Real AI is really not here. Excuse the pun. Google and IBM have friendly monsters that do a great job at Deep Learning. But the actual independent decision making that would be the functional result of actual intelligence (not algorithmic analysis output) simply does not exist yet, at least in the public eye. Most of what you hear about is really just Machine Learning. More on this in a few blog posts before my December talk on AI in Medical Devices at BIOMEDevice San Jose.

The problem with packaging stuff as AI?

It will always be the same: overpromise, under-deliver, dilute your brand, AND make it worse for the people that follow you. Just read my blog post on renal denervation, when I complete it and get it out later this week. The World's Fairs saw fewer contortionists in all their existence than the number of people still desperately trying to peddle renal denervation.

And while Scott Gottlieb wants to make the FDA business-friendly and whatnot, keep two things in mind: first, the FDA was created to avoid the kind of mortality and morbidity seen in Europe, where all kinds of snake-oil merchants walked in and did whatever they wanted; and second, Europe, after about 50 years, has finally decided to take a more FDA-like approach.

So lying, fluffing, or overstating will get you trapped somewhere with regulatory problems. And then there will be the face-saving that all the other regulatory agencies do by trying to get in on the game: bashing you.

My recommendation: Just don't do it.

But let's go through one more example before we get back to the recommendation that you have, hopefully, heard since you were a child.

23-and-you-maybe, but not me!

Years ago, when 23andMe came out, I was very irritated and wrote unequivocally about how they were going to make things hard for everyone. Why? All they could do was process your DNA. However, they marketed it in a way that made gullible people believe that as long as you joined the mania and got your DNA mapped, a cure for just about anything would suddenly be found. Again, marketing with a serious disregard for facts. I am sure that, at least initially, they were not able to meet their price point profitably (this is just an assumption I make), and were hoping to ride the hype wave of people thinking all they needed was Microsoft Excel (or Google Drive, given the company's origins) to compare their DNA with a bunch of friends - and lo! the answer to everyone's problems would be laid bare with genetic certainty. Well...

Back then (and even now, and for at least a handful of years to come), confirming a Single Nucleotide Polymorphism (SNP), where a single pair of nucleotides is off its hinges on the DNA ladder, took months to years. So the company was essentially planning to capitalize on the desperation of the sick. Well, long story short, it took YEARS for the FDA to tell the company to go pound sand (too bad neither the FDA nor I are allowed to use more colorful language).

Shockingly enough, the company first tried to push back, and only eventually gave in. Remarkable. And along the way, they slowed things down for everyone else, who could have behaved themselves and gradually introduced a usable product, legally, when its time actually came.

My Recommendations:

1. The Valley (there really is only one here, contextually) is full of "people" who think the rule of law exists for others. They quaintly call such behavior DISRUPTION. Ask Uber how that is working out. You might tell me that Airbnb broke the rules, bullied everyone, and got away with it. I ask you: which is more likely - that you will be the Uber of concussion detection, or the Airbnb of it?

2. Regulations exist for a reason. No, not so that you may flout them and find your own Lucy Koh (of Apple v. Samsung fame) sitting on the federal bench, just waiting to bite off more than he or she can chew! Again, your business plan can't be, "let's burn the bridge and, when we need to cross one, get acquired".

3. Don't ruin it for others. When you flout regulations or put out a terrible, immature product (read about what Medtronic did with renal denervation), you screw over quite a number of companies. People lose jobs, and patients don't get the care they need. Do you really want to be the geniuses that got everyone into that swamp?

4. Following the rules is actually not that hard. Hire people with experience doing this. Go to RAPS or the ASQ and look up good people. And I have good news for you: talented people in medical devices actually get paid terribly - compared, that is, to the so-so CS grad you hired, only to have to give him bags of money, feed him, and do his laundry, so that he may write misogynist memos ruining your already flaky reputation because you funded 23andMe!

5. Regulatory agencies are almost always behind the curve. If you doubt me, go ask a Software Quality Engineer what it takes to provide a software update on an FDA-approved product. But they learn. In the meantime, do not try to hoodwink them just because they don't understand the gibber-jabber of AI and so on and so forth. These are really smart people, and prone to anger (if you had to read and interpret text-only PDFs all your life - and NO, your ML algorithm cannot do a better job - you'd be on edge too). Play nice. Explain yourself to them. Show them your intentions are good. And work with them.

Doing all of the things I have stated is not going to earn you brownie points; you are expected to do them. There is a lot of change ahead with Digital Health, 3D Printing, 4D Printing (yes, it exists, mostly in labs), ML/DL/AI, IIoT, and who knows what else. When applying these technologies and paradigms to healthcare, good judgment, patience, and care are essential.

Subscribe and Support, Please!

Did you enjoy this post? Please subscribe for more updates, using the sidebar. Have ideas or blog posts you'd like to see here? Contact me at yamanoor at gmail dot com.

Reference:

1. On SyncThink: http://medcitynews.com/2017/09/fda-slaps-syncthink-warning-letter-concussion-detection-claims/


2. IBM's Watson on STAT: https://www.statnews.com/2017/09/05/watson-ibm-cancer/

3. Image, Courtesy Pexels: https://www.pexels.com/photo/black-and-blue-electronic-tools-on-green-circuit-board-39290/
