23 and me

I’ve followed the discussion on 23andme’s quarrel with the FDA with a lot of interest, and already had a heated argument with a friend about it. I realized that I had a jumble of disorganized ideas, possibly as a result of being the product of two different cultures: the medical establishment, and the Internet’s hacker ethos. I wrote this post to clear my mind.

Let’s start with “just the facts, ma’am”, and I’ll preface them by saying that I’m not a 23andme customer. I never found their offerings particularly compelling. Even worse, despite my training and professional affiliations, this is not my area of expertise, not by a long shot. So I’m an interested bystander, more than anything else. That said,

  1. 23andme offered a genetic test based on a swab
  2. The test was marketed as a “curio” of sorts: for entertainment purposes, with any medical or diagnostic value explicitly disavowed
  3. The test consisted of a genotyping microarray and a report based on the genotype calls from that test
  4. The report consisted of a series of correlations between the SNP variants detected by the microarray and known phenotypes of interest: eye color, susceptibility to certain conditions, and sensitivity to certain pharmaceuticals, among others.
  5. The FDA, after repeated attempts to work with 23andme, ordered them to stop marketing their services and basically shut down their only source of current income.

The FDA’s position

To me, the FDA seems to be claiming that 23andme is selling a diagnostic test and it is therefore subject to FDA jurisdiction. The FDA’s argument for this boils down to what programmers call “duck typing”. If it quacks like a diagnostic test, and waddles like a diagnostic test, it must be a diagnostic test, regardless of 23andme’s claims to the contrary.
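For the non-programmers: duck typing means judging a thing by what it does, not by what it calls itself. A minimal Python sketch (the class and method names are mine, purely for illustration):

```python
class DiagnosticTest:
    def report_risk(self):
        return "clinically validated risk report"

class EducationalCurio:
    def report_risk(self):
        return "for entertainment purposes only"

def interpret(test):
    # Duck typing: we never check what `test` claims to be,
    # only what it does. Anything that can report a risk
    # gets treated as a thing that reports risks.
    return test.report_risk()

print(interpret(DiagnosticTest()))
print(interpret(EducationalCurio()))
```

Both objects walk through `interpret` identically; the label on the class is irrelevant. That, roughly, is the FDA's point.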

The FDA is, to some extent, right. 23andme’s product certainly quacks like a duck. The technology is essentially the same as is used in some medical settings, except without the benefit of serious consistency checks, and without tailored, personalized counseling by a highly-trained professional.

Saying that “this isn’t a diagnostic test” doesn’t make it not a diagnostic test, especially when much of the value proposition seems to come from “learn things about your health and your genetic predisposition towards certain diseases.” 23andme, in other words, seems to have been saying “this isn’t a medical test” – wink, wink, nudge, nudge.

Do I believe that some people swab their cheeks and read genetic reports just for funsies? Yes, absolutely. I’m sure that there are lots of 23andme customers who are doing this for the entertainment value. Heck, I own a 3D printer, a LEGO robotics kit, and I write software for fun. I can’t fault anyone else for enjoying geeky stuff.

Do I think that most customers of 23andme do it for fun? I guess it’s possible, and I have no evidence to the contrary, but it fails my smell test. It seems unlikely.

The Internet

Here’s where the hacker ethos starts complaining.

First of all, 23andme is pretty clear on the fact that their services are not diagnostic tests, should not be used as such, and do not replace them. This is people’s own genetic information; the company is merely reading it for them, and providing some simple interpretation tools for fun, education, and self-knowledge.

Further, the argument goes, if anyone takes this seriously and, for example, gets a mastectomy because their risk of breast cancer was supposedly high, well, then they were morons, they deserve what they get, and anyway who in the medical community will be such an idiot as to perform the procedure without further confirmation?

The hacker also really wants to see his/her own source code.

The clinician

A long time ago, in a galaxy far, far away, I treated patients. Let me tackle the last question first: who’d be so stupid as to perform a somethingectomy based on a 23andme test?

Hopefully, no one.


Here’s how this scenario is likely to play out in real life, somewhere. I’ll use breast cancer as an example, mostly because it’s the most common example making the rounds, and also because preventive mastectomies in people with high genetic risks of cancer are real.

So say that Mrs. X comes into my office with an “educational” test showing a high risk for breast cancer. I obviously can’t ignore it. First, it might actually be true that Mrs. X carries a bad version of BRCA1. Second, imagine that I downplay her concerns enough that she drops the issue, and she then goes on to get cancer: I’ll probably end up on the receiving end of a lawsuit.

Doctors hate getting sued, and many are pathologically afraid of it. So the obvious thing here is to redo the test, for real this time, with a geneticist on hand and everything.

So say the test comes out positive. We handle it like any other positive test, and here the actual conduct will depend on family history, the patient’s risk tolerance, etc. This is relatively well-understood territory.

The problem is if the second one comes out negative. Who do we believe? Even worse, who does the patient believe? And what does the patient do with the fear that the first test was correct?

Some patients will not take no for an answer, and go from doctor to doctor, until someone, somewhere listens to their very real fear and performs the procedure the patient wants to feel safe. Or, alternatively, they’ll get tested over and over… until one test comes back positive, if only by pure chance.
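That “by pure chance” is just arithmetic: even a fairly specific test, repeated enough times, will eventually return a false positive. A back-of-the-envelope sketch (the 1% false-positive rate is an illustrative assumption, not a real figure for any actual assay):

```python
def p_any_false_positive(n, fp=0.01):
    """Probability of at least one false positive in n independent
    tests, each with false-positive rate fp (illustrative numbers)."""
    return 1 - (1 - fp) ** n

for n in (1, 5, 10, 50):
    print(n, round(p_any_false_positive(n), 3))
```

With these made-up numbers, ten repeats already push the chance of a spurious positive near 10%, and fifty repeats near 40%. A determined patient can manufacture a “positive” result through sheer persistence.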

Medical tests are Pandora’s boxes; cans of worms. Zymurgy’s law applies here: once you open a can of worms, the only way to recan them is to use a bigger can.

That is why the FDA is uneasy about it.

So who is right?

You read the part about me just being a bystander?

It was serious. I have no idea. Some people have very passionate opinions, but I’m not sure that anyone knows what the right way forward is for society. This is new, unexplored territory. Let me also say here that the hackers can handle this information, and much more, just fine.

The FDA is falling back on its very conservative instincts: protect people from harm, even if it comes from themselves. You could argue that the entire point of the FDA is to be paternalistic, by restricting access to substances and information that might cause harm when misused.

Widespread access to the information needed to make healthcare decisions is a relatively new phenomenon. Arguably, paternalism was necessary a few decades ago, to protect patients from the unknown (there was a lot of it) and from unscrupulous but authoritative-sounding pseudo-science and its practitioners, who would harm them for personal gain. Nowadays, a lot of people don’t need that protection, and they don’t want it, either.

I suspect that the FDA will end up on the losing side here, sooner or later. The pendulum is swinging, slowly but surely, towards more patient involvement and autonomy. When this will happen, I don’t know. What price we will pay, I don’t know. It seems certain that some people will be harmed along the way. How many? I don’t know, and my training says that even one is too many. Will it be worth it? I hope so.


2 replies on “23 and me”

Very interesting! I remember years ago when fetal 3-D ultrasound appeared on the market. One of the companies that brought the machines to Chile decided to set up an exhibition and perform free ultrasounds on pregnant women in one of the main shopping malls here in Santiago. They published ads in the newspapers promoting their stall in the mall, and who knows how many ultrasounds were performed to satisfy the curiosity of unsuspecting women, who could now see the faces of their babies. And what if a baby had a gross anomaly? Medical technology applied to diagnosis cannot be a country-fair attraction, or be put to use just to satisfy people’s curiosity. An ethical analysis is lacking here, because it seems that if something can be done, somebody will always be willing to do it. When problems like these are seen from an ethical perspective, the pieces start to fall into the places they should be, and it no longer seems natural to do whatever technology allows us to do.
