An article in this month’s New England Journal of Medicine (NEJM) (http://www.nejm.org/doi/full/10.1056/NEJMoa1010029) reported the results of a six-month clinical trial that evaluated the effectiveness of a telephone monitoring service for patients with heart failure. The authors of the study, based at Yale University, reported that this service had no impact on the health of the patients when compared with a control group.
In the weeks to come, this article is certain to generate a lot of comment and debate. It provides some important results, and we need to assess the issues fairly. But we may also see a considerable number of unwarranted conclusions and unfair criticisms of remote cardiac monitoring, of telemedicine, and even of the study itself. As we have seen before, once the facts and scientific analyses end, the generalizations and misstatements begin.
Beware of the headlines. The study was narrowly focused, with biases in its design that do not allow broad generalizations. Below I have included some facts about the study, a few of the lessons learned, and finally some comments on the validity of the conclusions.
THE FACTS
The study enrolled 1,653 heart failure patients with a median age of 61, who were divided evenly and at random into a test group and a control group. Over six months, the test group was asked to telephone a call center daily and follow automated prompts through a series of questions about their health status. At the end of the six months, the rates of hospital readmission or death in the two groups were compared. Statistical analysis determined that there were no appreciable differences between the groups. Based on these findings, the authors concluded: “Among patients recently hospitalized for heart failure, telemonitoring did not improve outcomes.”
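The article does not spell out the statistical method the authors used, but the kind of comparison described, readmission-or-death rates in two arms of roughly 826 and 827 patients, can be illustrated with a simple two-proportion z-test. The sketch below is my own illustration in Python; the function name is mine and the event counts in the example call are hypothetical placeholders, not the trial’s data.

```python
import math

def two_proportion_z_test(events_a, n_a, events_b, n_b):
    """Two-sided z-test for a difference in two proportions (normal approximation)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    # Pooled proportion under the null hypothesis of no difference between arms
    pooled = (events_a + events_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Group sizes match the even split described above; the event counts are
# hypothetical, for illustration only.
p_test, p_control, z, p_value = two_proportion_z_test(
    events_a=430, n_a=826, events_b=425, n_b=827
)
print(f"test rate={p_test:.3f}, control rate={p_control:.3f}, z={z:.2f}, p={p_value:.3f}")
```

A result like this, with a p-value well above conventional thresholds, is what “no appreciable differences” between the groups would look like in practice.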
Despite pre-study training, approximately 15 percent of the patients in the test group never started calling in for monitoring. By the end of the study, 45 percent of the remaining test patients had stopped making their monitoring calls. The comparative statistical analysis was based on the entire test group, including those who failed to participate in making the phone calls.
The authors of the study decided not to use automated data-collection devices in the test patients’ homes, nor were medication minder devices included. Although attempts were made to remind patients who failed to make their telephone calls, the study relied on the patients initiating the calls and self-reporting their information, including weight gain or loss.
LESSONS
One of the most glaring outcomes (as well as one of the most glaring errors in the study design and data analysis) was the lack of patient participation in the test group. Of the 826 patients in the original test group, only 707 ever started making the monitoring calls, and by the end of the study only 390 were still making them. Perhaps some type of monitoring device in the home (or maybe just an annoying reminder beep) could have improved participation. Certainly greater efforts need to be made to engage patients in their healthcare, whether that means reducing weight, taking their medicine, or using technology to monitor and report on their conditions.
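To make the attrition concrete, here is a quick back-of-the-envelope check that ties the counts above to the percentages quoted in the facts section; the Python is purely for illustration, using only the figures cited in the article.

```python
# Participation figures cited in the article (test arm only).
enrolled = 826        # patients assigned to the telemonitoring arm
ever_called = 707     # patients who made at least one monitoring call
still_calling = 390   # patients still calling at the end of six months

never_started = enrolled - ever_called     # 119 patients, ~14.4% -- the "approximately 15 percent"
stopped = ever_called - still_calling      # 317 patients, ~44.8% -- the "45 percent of the remaining"

print(f"Never started: {never_started} ({never_started / enrolled:.1%})")
print(f"Started, then stopped: {stopped} ({stopped / ever_called:.1%})")
```

In other words, fewer than half of the test group was actually being telemonitored by the end of the trial, yet the headline conclusion covers the entire group.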
An accompanying editorial in the NEJM focused on several concerns. For example, telephone monitoring services may not be asking the right questions of patients, suggesting that we need to reevaluate which ongoing physiological data should be measured for heart failure. More timely staff follow-up with the test patients for corrective action might also make a difference. Alternatively, smart software that automatically generates diagnoses and treatment plans from the data, supplementing health-provider support, may lead to significantly improved outcomes.
COMMENT
In the Discussion section of the article, the authors chose to take a swipe at vendors and make an incredibly broad generalization about all patient monitoring: “In an environment in which vendors promote their products to health systems that are under increasing pressure to reduce readmission rates, the knowledge that telemonitoring is ineffective suggests the need to consider alternative approaches to improving care.” Even worse, one of the authors chose to write an article on the study for Forbes.com under the headline “Why Telemedicine is Overhyped.” This raises serious doubts about any intent to be fair and balanced.
Unfortunately, we don’t know whether the test group patients who kept up their telemonitoring through the end of the six months showed any difference in their rate of re-hospitalization. It would have been helpful, and relatively easy, to show this data. It is troubling that it was not included.
The authors made a point of criticizing previous studies for using too small a sample, but it does not appear that they ever conducted a thorough literature search on the subject. Despite quoting several other studies, they failed to reference a landmark Veterans Administration study of telemonitoring, which reported positive results and was based on a broader group of patients over a longer time span.
I am concerned that in the days to come we may witness misinformed discussions and articles about remote patient monitoring as a result of some unfortunate printed malware that slanders all of telemedicine. Let’s not throw the baby out with the bathwater, especially when the bathwater may just need adjusting.
Friday, November 19, 2010