Tuesday, March 13, 2007

Letting Computers Decide Who Lives and Who Dies?

According to an article in the New Scientist, researchers say a simple formula can predict how people would want to be treated in dire medical situations as accurately as their loved ones can. In the study, surrogate decision makers were right only 68% of the time when predicting what their loved ones would want under hypothetical circumstances, while the computer program did better, getting it right about 78% of the time.

The formula rests on the finding that people asked to imagine themselves in comas want care if they have at least a 1% chance of recovering awareness. But using such a formula would be a case of garbage in, garbage out. Diagnoses of persistent vegetative state are notoriously inaccurate. Indeed, there are unexpected awakenings, and cases of confusion between a "locked-in state" (aware but unable to communicate) and a persistent vegetative state (actually unaware).

More importantly, using such a computer model would not treat the patient as an individual. Even if the computer predicted correctly in 78 out of 100 cases, that would still leave 22 people whose wishes would be violated.
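For illustration only, the kind of blanket rule the article describes, and the arithmetic behind the point above, can be sketched in a few lines. The 1% threshold and the 78% accuracy figure come from the article; the function name and everything else here are hypothetical.

```python
# Hypothetical sketch of the kind of rule the article describes:
# predict "wants treatment" whenever the estimated chance of
# recovering awareness is at least 1%. Not the study's actual code.

def predicts_wants_care(recovery_chance: float) -> bool:
    """Blanket rule: treat if chance of recovering awareness >= 1%."""
    return recovery_chance >= 0.01

# The article's accuracy figure applied to 100 patients:
patients = 100
correct = round(patients * 0.78)  # the formula is right about 78 times
wrong = patients - correct        # ...leaving 22 whose wishes are violated
print(correct, wrong)             # 78 22
```

The sketch makes the objection concrete: a single population-level threshold, however accurate on average, necessarily misclassifies the individuals who do not fit the average.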

Everyone should create an advance directive. Absent that, patients and society are much better served by giving the benefit of the doubt to life over death when we do not know what the patient would want. My friend Bobby Schindler, brother of Terri Schiavo, had it right when he told the New Scientist: "I believe it would be extremely irresponsible to allow machines to make decisions involving life and death." (Schiavo was in a persistent vegetative state for 15 years until she died in 2005 after doctors removed her feeding tube. Her case sparked a huge debate in the US.) "If a person becomes incapacitated, is not dying, and can assimilate food and water via a feeding tube, then I believe that we are morally obligated to care for the person and provide them this basic care--regardless of a computer attempting to 'predict' what that person's wishes might be," Schindler adds. "Essentially, you would be allowing a machine to determine what is ethical, what is right and wrong, which no machine is able to do."

5 Comments:

At March 13, 2007 , Blogger JacqueFromTexas said...

Damn.

 
At March 14, 2007 , Blogger Tim said...

Talk about setting the stage for socialized health care. The 1% criterion is the starting point, an "objective measure" despite, as you point out, the lack of any objective test. Budgets, resources, etc., will eventually be used to fine-tune (increase) the cut-off by factoring in what is best for the patient and society. Just watch the next study show the exponential rise in cost and a decrease in quality of life as the treatment cut-off changes...

Yep, Bobby got it right.

 
At March 14, 2007 , Blogger Wesley J. Smith said...

Tim: Thanks for commenting. You nailed it. Bioethicists have already come up with a subjective quality-of-life test known as the QALY. Indeed, it would be very easy to program the computer to apply it, or some other form of subjective measurement, to decide whether a life is worth spending resources upon.

Of course, all of this takes away the professional nature of medicine.

 
At March 14, 2007 , Blogger James said...

What a disturbing article. It is really impossible to determine that you wouldn't want to live under a certain set of circumstances without actually being in the condition.

When most people are asked whether they would want to live in a state like Terri Schiavo's, or even whether they would want to live their lives in a wheelchair as a quadriplegic, 99% of them are going to say NO.

However, their decision may change if they find themselves in those conditions.

Without something in writing or specific explicit instructions, food and water (feeding tube) should never be withheld from a person who isn't dying.

And even if they have given permission, there should be some pause for debate.

In a few days it will be the second anniversary of Terri Schiavo's court-ordered death by starvation and dehydration.

 
At March 14, 2007 , Blogger T E Fine said...

Kinda reminds me of Isaac Asimov's short story "Franchise." One guy in all the US gets picked to vote for the rest of the country, only he doesn't make a deliberate choice for president; he is just asked a series of questions, and a computer takes his answers and extrapolates how the rest of the country would vote.

The guy's father-in-law complains about it - "What if I change my mind about how I would vote, or what if I would just vote differently for the hell of it?" That's just like this - what if a person says "Pull the plug" but changes his mind at the last minute, or remains conscious while his body can't move? He's totally disenfranchised - and we're letting that happen.

 
