Medical Device Daily Washington Writer
WASHINGTON – The $1.1 billion in the economic stimulus package dedicated to comparative effectiveness research will help stem the tide of rising healthcare costs and will provide sorely needed resources to help payers, providers and consumers make better healthcare decisions, said Winifred Hayes, CEO of Hayes Inc. (Lansdale, Pennsylvania).
There is a failure to effectively and efficiently integrate scientific evidence into healthcare decision-making in clinical practice, which results in overuse, misuse and, in some instances, underuse of health technologies, Hayes told a panel of government officials last week during a discussion about comparative effectiveness research.
The so-called "listening session" was held by the Federal Coordinating Council for Comparative Effectiveness Research – a congressionally mandated panel under the American Recovery and Reinvestment Act (ARRA) of 2009 – to gather public input on how a portion of the comparative effectiveness funds should be spent.
The 15-member panel is tasked only with providing advice to the Health and Human Services (HHS) secretary and does not carry any authority for making final decisions about the kinds of research projects that will be funded.
ARRA authorized $300 million for the Agency for Healthcare Research and Quality (AHRQ), $400 million for the National Institutes of Health (NIH) and another $400 million to be spent at the discretion of the HHS secretary for comparative effectiveness research. The council's work involves the $400 million for HHS.
The panel must submit a report to Congress by June 30, the same date a committee of the Institute of Medicine – also congressionally mandated – is required to submit a similar report to lawmakers on the $400 million HHS comparative effectiveness funds.
Comparative effectiveness research compares treatments and strategies to improve health outcomes, said AHRQ Director Carolyn Clancy, a member of the federal panel. "This information is essential for clinicians and patients to decide on the best treatments and for our health care system to achieve better performance," she said at the opening of last week's meeting, the first of three such planned events over the coming weeks.
The federal panel, which includes high-ranking government officials from several HHS agencies, including the NIH and the FDA, as well as the Departments of Veterans Affairs and Defense, heard from about 40 people representing various stakeholder groups at the three-hour meeting.
To generate the maximum impact from the research dollars allocated by ARRA, as much currently available data as possible should be used in comparing treatments, said John Lewis of the Association of Clinical Research Organizations. That includes post-approval studies and a wide range of other sources, from electronic health records to healthcare claims databases, most importantly those of the Centers for Medicare & Medicaid Services and the Veterans Administration.
In addition, he contended, special attention should be paid to the methods and standards used to aggregate, analyze or record comparative effectiveness data. In allocating ARRA funds, Lewis said, priority should be given to organizations with successful and demonstrable track records of working with large amounts of data.
He insisted that meta-analyses of existing data are "an insufficient method to reach the desired research endpoint."
Instead, Lewis said, more and better clinical trial designs are needed.
Comparative effectiveness studies should capture all relevant aspects of diseases and their treatments using high standards of evidence, the Biotechnology Industry Organization (BIO; Washington) said in its written comments submitted to the federal council.
The trade group argued that comparative effectiveness analyses often ignore important aspects of treatment interventions that affect patients, or may not account for the full spectrum of disease severity.
Increased worker productivity, reduced caregiver burden and savings to other parts of the healthcare system also are important benefits that may not be reflected in studies conducted with a narrow perspective, BIO contended.
In addition, the group said, promoting innovation in personalized medicine requires clinicians to have the ability to make patient-centered treatment choices without conforming to inflexible standards or practice guidelines.
Jim Couch, chief medical officer at Patient Safety Solutions, urged the panel to consider the potential legal implications of comparative effectiveness studies. "Can they be used as a sword by plaintiffs' attorneys if there is noncompliance or compliance with bad results or can they be used as a shield if there is compliance even if there is a bad result?" he asked.
Steven Findlay of Consumers Union, publisher of Consumer Reports, said the comparative effectiveness funding provides a "unique opportunity" to develop strong and clear policies on conflict of interest in biomedical research.
While the U.S. has been "inching" its way towards more transparency and disclosure of conflicts of interest in clinical research, the new funding is an "important opportunity to make a leap in this area," he said.
Comparative effectiveness funded by the government, Findlay argued, "should strongly favor researchers and institutions that are devoted to doing this research in the public interest and who have no current conflicts."
"The very reason" the funding was mandated, he insisted, was because "too much industry-funded research fails to adequately answer the critical questions that can help doctors and patients make treatment decisions."
In addition to requiring that government-funded comparative effectiveness research be a "conflict-free zone," Findlay said, the research must be mobilized to examine the health outcomes of various racial and ethnic populations.
"We must end the shameful gaps that exist between the health status of some minority populations and other Americans," he declared.
Harold Miller, president of the Network for Regional Healthcare Improvement, encouraged the panel to make sure that the research is used and not just produced. "It does very little good to have a lot of comparative effectiveness research if people are not aware of it, if they don't understand how to use it and if there are barriers to both patients and providers being able to use it," he said.